Evolution of the test environment: Interview with Igor Khrol (Toptal) and Anton Semenchenko (COMAQA.BY and CoreHard)

We don't like waiting in line: we want to order online, we are no longer willing to buy tickets at the box office, let everything live in an app, in electronic form. But there is an important "but"! We want everything here and now, and we want it to work flawlessly, like clockwork: the pizza delivered on time, the cinema seat matching the one in the confirmation. What plays one of the key roles in all this variety of applications and services?
The test environment, of course, without which it is impossible to release a quality product quickly! Modern testing tools burst into our lives like a hurricane and changed what we can do in just a few years. We made great strides in virtualization and containerization, tried out the Selenium line of tools, and argued about the pros and cons of Docker.
Why was all this necessary and what have we come to?
What future awaits us?
Let's talk "for testing" with the guru of the profession. Let's go from A to Z on the toolkit. Igor Khrol and Anton Semenchenko will help us in this.
We stock up on coffee, tea, other drinks and start. The conversation will be long.
So, Igor Khrol is a test automation specialist at Toptal with extensive experience in most of the popular tools (Selenium, HP QTP, TestComplete, JMeter). - Igor, good afternoon. Let's start our conversation. The first question is this: more and more companies are moving away from "it worked on my machine" toward full-fledged testing departments staffed with highly qualified specialists. Do you see a trend here, or do companies keep trying to save on testing?
Good afternoon. Allow me to disagree with the very framing of the question. Classical testing departments, which take a particular build of the product, test it, and return a list of defects, no longer match the speed of modern business and software development. We are becoming more Agile (the word is already rather worn out): a testing specialist sits inside the project team and is ready to help development quickly. Of course, test engineers communicate with each other, but there is no department with its own management structure as such. Many advanced companies work in this format (Spotify, for example, is described very well here). Testing is increasingly integrated into the development process.
Life is getting very fast, so you need to change quickly and roll out new releases quickly; customers do not want to wait a week. The formalized procedure of "build handed over, tested, results back in a week" no longer works that well.
As for saving money: I would not say it happens much, especially where the environment is concerned. The cost of hardware has dropped significantly in recent years, while the amount a company can lose because of bugs is disproportionately higher. So checking only on your own computer may still occur in some cases, but it is definitely not a trend. I do not know of companies that, to save money, would refuse to buy a server or to spend money on a good test environment.
- The first testing tools began to appear in the mid-90s. Do you agree that active development started precisely in that period, or did it merely lay the foundation for "building" today's high-tech products?
The first xUnit systems appeared a long time ago (Wikipedia says around 1989), and since there were not yet that many user interfaces, this was probably enough. Then, in the late nineties and early two-thousands, when user interfaces became more common, the first UI tools began to appear. Among the first in my practice were WinRunner (released in 1995) and Watir (there are versions from 2005).
Then and now
Testing has always existed: if you wrote or made something, you must check it. Tester's Day, which traces back to 1945, commemorates exactly that.
As for the test environment specifically, I would not say there are special tools created just for preparing it. What we do is apply the same approaches used to deploy production environments: Docker, Puppet, Ansible and related solutions. They were created to give a reproducible result. As a side effect, we can clone our production environment and test on it safely, as close to reality as possible.
Previously this meant hundred-page instructions, and new test environments took months to deploy; now the approaches are much better. Everything is automated, everything is in code: run the script and the environment is configured. So I would not call these things testing tools; it is more a topic of DevOps and administration.
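To make the "run the script and the environment is configured" idea concrete, here is a minimal sketch that wraps an ansible-playbook run so a tester can rebuild an environment with one command. The playbook and inventory names are hypothetical, not something mentioned in the interview.

```python
#!/usr/bin/env python3
"""Rebuild a test environment from code with one command (illustrative sketch)."""
import subprocess
import sys

PLAYBOOK = "provision_test_env.yml"   # hypothetical playbook name
INVENTORY = "inventories/test"        # hypothetical inventory path


def provision(env_name: str) -> int:
    """Run Ansible and return its exit code."""
    cmd = [
        "ansible-playbook",
        "-i", INVENTORY,
        PLAYBOOK,
        "-e", f"env_name={env_name}",  # pass the environment name as an extra var
    ]
    return subprocess.call(cmd)


if __name__ == "__main__":
    sys.exit(provision(sys.argv[1] if len(sys.argv) > 1 else "staging"))
```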
- Igor, please tell us about your first "acquaintance" with solutions in this area. When did you first get to know the VMware platform, for example? Do you still use these software products?
Speaking of VMware, it really was a good help back when we needed to test products on different operating systems. Today this has evolved into clouds such as Amazon or Google Cloud. If I need a test environment, I run a script or message a Slack bot, and I have a working server.
You use it for a couple of hours, days or weeks and then shut it down. Continuous integration at Toptal is automated to the maximum: code gets pushed to GitHub, the necessary number of servers is spun up in Google Cloud somewhere, the tests run, and the result is written back to my pull request if there are any regressions. Locally I sometimes still have to bring up virtual machines to test specific things: for example, to see how an xlsx report will look in Microsoft Office on Windows.
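A hedged sketch of that "spin up, use, shut down" lifecycle, driving the gcloud CLI from Python. The instance name, zone and machine type here are illustrative assumptions, not Toptal's actual configuration.

```python
import subprocess

# Illustrative values only; any real project would have its own.
INSTANCE = "disposable-test-env"
ZONE = "europe-west1-b"


def create_test_server() -> None:
    """Create a short-lived VM for a test run."""
    subprocess.check_call([
        "gcloud", "compute", "instances", "create", INSTANCE,
        "--zone", ZONE,
        "--machine-type", "n1-standard-2",
    ])


def destroy_test_server() -> None:
    """Tear the VM down once the tests are finished."""
    subprocess.check_call([
        "gcloud", "compute", "instances", "delete", INSTANCE,
        "--zone", ZONE,
        "--quiet",  # skip the interactive confirmation
    ])


if __name__ == "__main__":
    create_test_server()
    try:
        pass  # run the tests against the fresh server here
    finally:
        destroy_test_server()
```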
- Many developers like to divide test environments into clean (a freshly installed OS with minimal software) and dirty (as close as possible to the production version). Do you think this distinction makes sense? Isn't it rather that there can be a huge number of such options, and it is better not to look at them only in this light?
It depends on the task. If you run unit tests, you need a minimal set of software: something to run your code (a JVM, an interpreter...) and that's it. If you are testing a separate microservice or component of a system, you may not need to run much either, only what the piece you are interested in depends on. Practice shows that having some kind of staging or preproduction (whatever it gets called), as close as possible to the production environment, is very useful for final checks and acceptance tests. Maximum proximity means everything: the same hardware, the same minor versions and patches of the software, a complete data set, and so on.
- Is the process of preparing a test environment very different now from what it used to be? Which tools are appropriate in which cases? That is, should the solution be chosen depending on the size of the company, or do today's tools scale easily?
When I started working, the test environment was created according to written instructions, or by admins nobody really knew. Over time testers began to check those instructions, to test the deployment process itself. Gradually everything moved toward more automation and, as a result, toward reproducible results and a smaller human factor. Nowadays the environment is rarely set up by hand; most likely it will be Ansible, Puppet or something similar. As a result, the test environment is as close as possible to production, and we do not test things that will not exist on prod.
- Have you used, or are you using, container technologies in your daily work? Have you looked at RancherOS or Red Hat's Atomic?
The topic of Docker, containers and the tooling around them is booming right now. Does this affect the test environment? Of course it does. You can already get a working system for tests in a couple of clicks, which is good. At Toptal we use containers mainly for testing (see the sketch after this list):
- a pre-built container cuts the time needed to prepare for a run;
- if you need several components for integration testing, you can easily get them by starting several containers connected to each other.
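A minimal sketch of that second point using the Docker SDK for Python; the image names, container names and the network are assumptions made for illustration, not Toptal's actual services.

```python
import docker  # Docker SDK for Python (pip install docker)

client = docker.from_env()

# An isolated network so the containers can reach each other by name.
network = client.networks.create("integration-tests")

# Hypothetical dependencies of the service under test.
db = client.containers.run("postgres:9.6", detach=True,
                           name="test-db", network="integration-tests")
app = client.containers.run("my-app:latest", detach=True,
                            name="test-app", network="integration-tests",
                            environment={"DB_HOST": "test-db"})

try:
    pass  # run the integration tests against test-app here
finally:
    # Throw the whole environment away after the run.
    for container in (app, db):
        container.remove(force=True)
    network.remove()
```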
- Once identical configs and applications are in place, the question of transferring data arises. In which cases does it make sense to maintain the test environment as a mirrored copy of prod? An important point here is depersonalization of the data in the database. How do you feel about this practice when data is handed over for testing?
A complete data set is most often needed for acceptance tests, when you need to look at the final result. Some problems cannot be found if you have little data or the data does not look like the real thing. Performance tests are also run on real data in many cases.
Depersonalization is a good and necessary practice. You really do not want password hashes, or even your customer lists, to leak from the test environment into the outside world.
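As an illustration of what depersonalization can look like in practice, here is a minimal sketch (the column names are hypothetical) that replaces personal fields with deterministic but meaningless tokens before a dump goes to the test environment.

```python
import csv
import hashlib


def pseudonym(value: str) -> str:
    """Replace a personal value with a stable, meaningless token."""
    return hashlib.sha256(value.encode("utf-8")).hexdigest()[:12]


def depersonalize(src_path: str, dst_path: str) -> None:
    """Copy a CSV dump, masking the personal columns (names are made up)."""
    personal_columns = {"email", "full_name", "phone"}
    with open(src_path, newline="") as src, open(dst_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            for column in personal_columns & set(row):
                row[column] = pseudonym(row[column])
            writer.writerow(row)


if __name__ == "__main__":
    depersonalize("customers_prod.csv", "customers_test.csv")
```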
Selenium
- Selenium (the whole product line) is now being actively adopted in test environments. Have you watched the project change and grow new functionality? What can you say about Selenium WebDriver in its current state?
I have been following the development of Selenium closely and using it since version 0.8. I see no point in retelling the whole story, since the site selenium2.ru covers it very well.
Speaking about the current state of the project, the most significant fact is that it has become a W3C standard for browsers, and the main browser vendors now implement the drivers themselves.
The ecosystem around WebDriver does not stand still either; it is probably developing even faster than the WebDriver API itself. If earlier every test automation project started with writing a custom framework, now self-written solutions are considered bad form. In almost any language there are ready-made libraries that spare you from writing, yet again, how to correctly wait for elements on a page or work with AJAX. In Java: HtmlElements, Selenide. In the Ruby world: Capybara and page-object. Webium in Python.
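To show the kind of boilerplate such libraries remove, here is a minimal hand-rolled page-object sketch in plain Selenium for Python. The page, its locators and the URL are hypothetical; libraries like the ones listed above package the waiting and element lookup so you do not have to write it yourself every time.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC


class LoginPage:
    """A hand-written page object; the locators and URL are made up."""

    URL = "https://example.com/login"

    def __init__(self, driver):
        self.driver = driver
        self.wait = WebDriverWait(driver, 10)

    def open(self):
        self.driver.get(self.URL)
        return self

    def log_in(self, user, password):
        # Explicit waits instead of hand-rolled sleep() calls.
        self.wait.until(EC.visibility_of_element_located((By.ID, "login"))).send_keys(user)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()


if __name__ == "__main__":
    driver = webdriver.Chrome()
    try:
        LoginPage(driver).open().log_in("demo", "secret")
    finally:
        driver.quit()
```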
Of the good resources dedicated to Selenium, I can recommend the recordings of the SeleniumCamp conference. I take part in it regularly and I like how the topic develops every year.
- In your opinion, which tools for creating test environments have earned top marks over the past few years? Which services are definitely worth trying now? Perhaps there are emerging projects worth paying attention to already?
The topic of creating test environments is closely tied to DevOps, and separate tools made just for testers do not really exist as such. I consider it a remarkable achievement that the phrase "but it works on my machine" is heard less and less often, because environments are becoming more and more identical. The keywords here are Ansible, Puppet, Docker, Vagrant. Deployment scripts have become an integral part of projects and of delivery.
One cannot fail to mention cloud solutions (AWS, Google Cloud, DigitalOcean). Previously everyone bought a server, and a wave of indignation arose at any attempt to hand something over to a third party. Now few companies can afford their own data centers, and there is no need to.
Among the promising directions I would point to cloud solutions where you do not have servers as such at all: nothing to reboot, patch, or otherwise spend time on instead of useful work. You push the code, and it is in the test environment or on prod. That is Heroku, Google App Engine.
- Thank you for the answers. We look forward to your next talks.
Anton Semenchenko is an activist of the test automation community www.COMAQA.BY and of the "harsh" C++ development community www.CoreHard.by. Main specialization: automated testing and low-level development in C++ and below. - Anton, good evening. The topic of our conversation is the evolution of the test environment. Let's touch a little on the history of how you became a specialist, and as we go we will get to testing.
Good evening! All right.
- How did you end up in this whole story? Did your move into testing happen in parallel with the growth of that area in the company where you worked, or was it entirely your own choice?
It all happened by chance, but it is certainly my own choice. Like any other broad-profile IT specialist, I was indirectly involved in testing. Quality assurance of the final product is a complex, multi-part process: coding standards, code review, unit tests, pair programming, formal discussions of key sections of code, work with ERD and PRD documents are all elements of Quality Assurance (hereafter QA) in which developers take part. So the general QA cycle was clear to me, and I certainly took part in ensuring quality.
There were also projects devoted entirely to QA, unit testing for example: Symantec once asked us to cover with unit tests the core of its flagship products, developed in pure C back in the early 80s. On the one hand this is a difficult engineering task; on the other, it is one hundred percent QA. We dealt exclusively with unit tests for code that, in principle, was never designed to be tested. Sometimes there were functions with a cyclomatic complexity of 250 or more. There you go: you can spend a month on a single function just to understand how to cover it with tests. So I was, of course, connected with QA, but only indirectly.
Automated Testing
At my last job, at ISsoft, the idea came up to open an independent automated testing department. The company already had automation, but, first, we offered it not as a service but as a resource: here are the specialists, you formulate their goals, tasks and processes yourself, and they work in the style of "I can dig, I can not dig." There was a desire, or rather a business need, to reach a new level both in quality of service and in the technical side of the solutions. Second, because of the tasks my colleagues had been dealing with for years, there was nobody in the company ready to take on these "challenges", with all due respect to their professionalism.
I received the proposal to organize such a department from scratch. The role required different skills, both in software development and in working with people, because I had to assemble a wonderful team, otherwise "there is no moving mountains." The choice fell on me not only, and not so much, because of the purely technical side, but because of a combination of factors. The "appointment" seemed interesting to me; as I thought then, and apparently was not mistaken, test automation is an unconditional trend.
I started working in this direction about four years ago. I was very lucky: I immediately found people who fit the team perfectly and complemented each other, and the backbone came together: Andrei Stakhievich, Vadim Zubovich and many other professionals known from numerous conferences, publications, trainings and other activities. Without the team I could not physically have coped with these tasks.
Naturally, to understand what automation will be tomorrow and how to develop and sell the expertise properly, you need to understand what it is today. The simplest and most correct solution is to ask specialists. We began actively attending conferences as listeners and, in parallel, building prototypes. For example: take twenty tools that are currently at the top, write a prototype with each one for a certain type of application, do a comparative analysis, and draw our own conclusions. The result was good overview information that most companies simply did not have.
Representatives of other companies knew one or two directions very deeply, but nobody had such broad coverage across twenty. We also saw the problem of an information vacuum: at least four years ago, only a handful of specialists knew what automation is today and were ready to share their "sacramental" knowledge. We started giving talks ourselves and sharing information, and this led to the idea of organizing the www.COMAQA.BY community, whose goal is to build an effective platform for communication between specialists directly or indirectly connected with test automation.
It was clear that the area is developing so dynamically, and is so broad and multifaceted, that it cannot be covered by a single company; it requires very different specialists from different companies, better still from different countries. Now we actively travel around the CIS and try to cooperate; this autumn alone I will take part in 25 events across Russia and Belarus. That is roughly how I came to this interesting area. I cannot say it was exclusively my own choice: had I not received such an offer, and had I not managed to put together a wonderful team, it would not have happened. It is largely thanks to those people that I am now in automation.
- Is it fair to say that preparing a proper test environment, and the testing process itself, are gradually becoming a standard part of preparing and releasing a product?
It seems to me that this is a very complex, controversial question; it should be split into several, into a whole group of questions. I will voice my subjective opinion; many experts may disagree with it, which only makes the heated debate more interesting. In my opinion, it is practically impossible to bring anything conceptually new to how a process is organized, whether we are talking about managing a feudal castle or about software development. In the broad sense of the word, Socrates as formulated by Plato was the first object-oriented programmer: he had archetypes, categories (hierarchies), imperfect implementations of archetypes in our world, and so on. If you develop this idea and apply it to IT, you get OOP with classes, meta-classes, objects and the other technical attributes.
In fact, specialists responsible for setting up and organizing stands, formal test environments, serious test plans and other documentation existed already in the fifties, and in a much stricter, more standardized form than today. This was dictated by the fact that hardware was very expensive: machine time was spent extremely sparingly, and only real gurus were allowed near the computer, people who configured the environment extremely carefully and correctly.
It is hard to believe, but the hardware and software development described by Frederick Brooks in his canonical book The Mythical Man-Month was the second most expensive scientific project in the history of mankind; the first was the American space program. Today, if we organize the environment incorrectly, we miss a defect. Back then, on top of the usual "minuses", a mistake meant tens or hundreds of thousands of dollars wasted here and now, because machine time cost "cosmic" money.
On the other hand, a great deal has changed fundamentally today. The number of software products grows exponentially, while the complexity of the average product falls exponentially. If in the sixties exceptionally large, extremely complex software was developed for fundamental science, banking systems and the military, today it may be a web page for someone's pet; the task is incomparably simpler. But there is a third "side of the coin": because the quantity grows, a different kind of environment develops, and quantity turns into a new quality, as per Hegel.
The very "leap" to a new turn of the Hegelian spiral is dictated by necessity, by virtue of Sedov's law of hierarchical compensations. As "applied" thought develops, we get many different operating systems and their versions, browsers and other user-facing layers, technical components such as JVM or .NET Framework versions, "means" in the broad sense of the word, physical and virtual environments, very different hardware: these are today's realities.
It seems to me that the virtualization of the 60s and today's innovative test environment are simply different turns of the same dialectical spiral. In search of the optimum, IT specialists and end consumers are thrown from one extreme to another, each time reaching a new, fundamentally different technological level. Sometimes the transition is so "sharp" that we face the problem of an ultra-abrupt exit from the comfort zone and start looking for new ways to answer IT challenges which are, in fact, the long-forgotten old ones, but which certainly need to be rethought in a new context with the involvement of other specialists.
On the one hand, I cannot speak of "fundamental" changes; on the other, I cannot deny "qualitative" growth, since the number of different environments grows exponentially, and the problem of cross-integration has always existed. It is one thing when we have n variants of the environment, and quite another when we have on the order of e to the power of n and begin to "connect" them with each other. This problem of combinatorics, and of the dynamic environment as a whole, is extremely hot right now.
I cannot give a definite answer to such a complex question as originally formulated, except perhaps the following, theoretically significant one, which sounds a bit like an excuse: today's test environment is the next turn of the dialectical Hegelian spiral; the diameter of the spiral keeps shrinking while its "pitch" keeps growing. Recognizing the trend, you can see the "turning point" of the next turn coming, apply past developments in the new context, and prepare for it in time. The main thing is not to slide into the "dialectical vertical", or, to use a term from synergetics, the "Panov-Snooks vertical", in the area of environments: either we build a Skynet, or we climb to the top and cope with combinatorial complexity once and for all by introducing a huge number of abstraction layers, forgetting how "bare" hardware works and what hands-on system administration is.
Virtualization
Take the same virtualization on which most cloud services are built today, or containerization as a stage in the development of virtualization. Prototypes of virtualization appeared in the fifties; the first industrial, mass-produced virtualization, if memory serves, came out in sixty-four.
So virtualization can hardly be called something new (a "novelty" some fifty years old); on the other hand, in its current form, when there are many different virtualization engines and they all need to be "connected" with each other, the challenge is fundamentally different. A separate class of applications has even appeared whose only task is to make different virtualization engines "friends" in a uniform and efficient way and to build a single management API, whether a low-level CLI or a common UI, on top of which some further add-on may sit.
I will give a few indirect personal examples. For many years I worked on data protection solutions: endless variations of backup and data recovery. One of the very popular tasks in the recent past, today and, I am sure, tomorrow is the "cunning" bare metal restore [2].
An abstract situation off the top of my head: a physical machine on 32-bit Intel hardware with Windows 2000 must be moved, via backup and bare metal restore, in a few dozen minutes and ideally with one click, to 64-bit AMD hardware with Windows Server 2008... Now add a pinch of different virtualization engines to this rich mix... and try to solve the problem "hot", without switching the machine off from the point of view of the consumer of the "service". Such non-trivial transformations are very much in demand.
Three years ago this was a real IT Klondike; many large companies tried to squeeze into this gold-rush market of bare metal restore solutions, including (my "youthful maximalism" is visible to the naked eye) my startup DPI.Solutions. As soon as Windows XP officially went out of support, banks frantically began searching for a safe, fast and feasible way to migrate to a new OS en masse, because their internal policies did not allow them to stay on an OS without current security packs.
Because of the inertia of any large organization and the huge cost of OS upgrades, banks hoped until the last day that Microsoft would keep supporting Windows XP and did not start the migration. The result was a "fatal" situation for a bank: within two to six months, move hundreds of thousands of machines, every employee in every tiny branch, to the next version of the OS. That's it; you might as well shoot yourself.
Specialized bare metal restore solved exactly this problem within the "terrible" constraints described. You back up a machine with Windows XP and deploy it, fully ready for work, with all its "historical" information and settings, onto new or old hardware with a fresh Windows for which security updates are still being released. Many companies specialized in this area. It is an indirect but vivid situation illustrating today's complexity of the "environment".
Youth problems
It seems to me that formal preparation of test environments has reached us only today, largely because IT in the CIS is a "young" professional field: a generational gap occurred within the living memory of our teachers.
First there was Soviet IT, very powerful and vibrant, with its pros and cons; to put it mildly, it is hard for me to judge that time, those are stories "from the other side". Then we had no IT at all, or almost none. Rapid development began in the two-thousands: the first companies were being created everywhere, but they were not large, IT was not mainstream, and salaries, and hence the influx of talented people, were relatively low, comparable with other engineering fields.
Active growth began around 2005; the industry has been developing for ten, let it be fifteen, years. To be honest, we are still teenagers, with the classic "teenage" problems of ignorance and of overestimating ourselves, rediscovering what our fathers knew long ago. We do not even have to look far: take our colleagues in countries with physically smaller IT. For example, the concept of a "business analyst" appeared in Moldova only a few years ago. Clearly the function had always existed, but the concept of the role appeared only recently, a vivid illustration of the "youth of the profession", because in the CIS we are all "stewing in the same juice".
To take another example, colleagues from Uzbekistan shared their "sore spot". Their entire market of IT specialists is between one and two thousand people; there is no critical mass needed for a qualitative leap. There are only a few training centers, you can count them on the fingers of one hand across the whole capital; almost all IT specialists are generalists who know a bit of everything, but only superficially.
We suffer from the diseases of immaturity; global IT is developing very fast and our IT market is growing very actively, which means the difficulties, including the difficulties of that "gap", are raised to a power.
If we talk about DevOps specialists, how long ago did the term even appear in Russia? It has been actively used in Belarus only for the last three years, no more. We see with our own eyes both the "growing up" and the development of the IT industry, using Minsk as an example. Four years ago there was one major specialized IT conference per quarter, no more, and some areas were not covered at all.
Now every day there is at least a small meet-up devoted to one of the IT specializations; you could spend every day on in-person IT education of the broadest profile, if only you had the desire.
The quantity and quality of events is growing fundamentally. Take QA, for example. Three years ago there was not a single active community in Belarus dedicated to test automation, and only one dedicated to Quality Assurance in general, organizing one or two small technical events a year, with three or four excellent talks in the spirit of SQA Days and 20-40 listeners. The guys met regularly in an informal setting, created conditions for personal communication, played Mafia and other games. They did a very important and useful thing, but that initiative, with all my deep respect for the participants, cannot be called a professional community, nor can the "cozy get-togethers" be called conferences or even meet-ups.
They were pioneers, and many thanks to them for that. Today the COMAQA.by community alone holds four full-scale free or almost-free conferences a year dedicated to QA automation, plus regular lectures at universities and schools, cooperation with many IT course providers, meet-ups and webinars. The next event will take place on November 5-6: 2 days, 2 tracks, 8 workshops, 16 talks, more than 500 on-site listeners, and an online broadcast across the CIS. One of the workshops will be dedicated to Docker. All the information needed to participate as a listener or a speaker can be found on the community website. We will be very happy to share our knowledge and best practices.
Another striking example. About a year ago, together with colleagues, I created the CoreHard community, which brought together experts in "harsh" development, primarily C++ and below. Today we hold four conferences a year. The next event is on October 22: 11 speakers from the CIS and the USA, more than 350 on-site listeners, and an online broadcast across the CIS. CoreHard will bring together great speakers: Anton Polukhin, an active Boost developer, author of the book "Boost C++ Application Development Cookbook" and Russia's representative on the international C++ standardization committee; Yegor Kishilov, who has worked at Microsoft for more than 8 years, all of that time on the Bing search engine; Svyatoslav Razmyslov, head of the department developing the core of the PVS-Studio analyzer for C and C++ code; Evgeny Okhotnikov, an independent developer with more than 20 years of experience working on open-source tools that simplify the development of multi-threaded C++ applications; and many, many other great specialists. Many thanks to the colleagues who are ready to share their knowledge.
And this is just a small example, the special case closest to me. Dozens of active communities operate in Minsk, and some event takes place every day. Over the past few years we have seen a qualitative, fundamental change in the industry.
I am sure our "youthful" problems will soon pass, not least thanks to active communication through communities, conferences and other wonderful initiatives.
- So a high-quality test environment today is not only a stack of software and hardware, including containerization systems and cloud services, but also a well-thought-out ideology? How do you put together a valid test plan?
I will be brief. It seems to me that a test plan is definitely needed, and the test environment is a mandatory part of it, regardless of the size of the software and the chosen methodology, whether informal documentation or strict adherence to standards.
The environment, as an integral part of the test plan, must be taken into account, and taken into account "individually" at each stage of software development. Problems with recognizing the need for a formal approach to organizing the test environment, where they arise, are due solely to the "youth" of the industry in the CIS and to insufficient knowledge among specialists. In my subjective opinion, ISTQB certification is very useful for systematizing that knowledge.
One can agree with the terminology it proposes or argue about it heatedly (I am, rather, "not a conformist"), but a single consistent basis is very important. The main thing is that ISTQB pays close attention both to the test plan and to its components, including the test environment, so the certification textbook is a good way to acquire the relevant knowledge.
About Minsk I can say the following: today there are no open courses or management training in testing there. Within the COMAQA.by community we want to launch an eight-hour workshop on testing metrics as an integral part of management, devoted primarily to the practice, and to a lesser extent the theory, of their use. By way of advertisement: I am finishing work on an "author's" course on Management in Testing which, in quite different variations, will be launched remotely at Software-Testing.ru and in person in Minsk together with the Iskra training center.
Parts of the course have already been delivered as corporate trainings in English; for the first time in Russian I will present part of it as a 6-hour master class at the SECR conference in Moscow. I hope this information will finally convince our wonderful QA specialists of the need for "conscious", planned work on the test environment.
- What stages can the creation of a test environment be divided into? What do you always take into account when choosing testing methods and tools? How long does the setup process take now, and how long did it take in the recent past?
Great questions. One is tempted to answer in the KVN style: "Everything, as always, depends on everything!" :) Covering these topics properly would take at least a few talks, or better, a master class. I do not want to give a simple but incorrect answer to a complex and important question. Let's discuss them in detail in a separate post; some of the answers can be found in my talks.
Today I will only say this: ROI. You always need to be guided by ROI in everything: develop your own micro-calculator for the specifics of a particular project or task and use it together with the stakeholders. Since even in the field of law and moral damages mankind has not come up with a better universal measure than money, we definitely should not try to invent something of our own. Any decision should be balanced and materially justified, including the choice of approach to organizing the test environment and the choice of testing methods and tools.
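A minimal sketch of such a "micro-calculator"; all the field names and figures are hypothetical assumptions, not recommendations from the interview. It compares the cost of automating a check with the cost of running it by hand over the period you care about.

```python
from dataclasses import dataclass


@dataclass
class AutomationCase:
    """All numbers are illustrative assumptions."""
    build_cost: float            # hours to automate the check
    maintenance_per_run: float   # hours of upkeep per run
    manual_cost_per_run: float   # hours to run the check by hand
    runs: int                    # expected runs over the evaluation period
    hourly_rate: float           # cost of one engineering hour

    def roi(self) -> float:
        """(savings - investment) / investment, in money terms."""
        investment = (self.build_cost
                      + self.maintenance_per_run * self.runs) * self.hourly_rate
        savings = self.manual_cost_per_run * self.runs * self.hourly_rate
        return (savings - investment) / investment


if __name__ == "__main__":
    case = AutomationCase(build_cost=40, maintenance_per_run=0.2,
                          manual_cost_per_run=1.5, runs=200, hourly_rate=30)
    print(f"ROI: {case.roi():.0%}")  # positive means automation pays off
```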
As for the speed of deployment, it only keeps increasing; we solve problems faster and faster, and that is called progress, evolution. It is just that the tasks also grow and evolve, so the comparison is not quite fair: today we solve yesterday's tasks much faster, but only today's and tomorrow's problems are the relevant ones.
- Is the preparation of configs and test data a foundation you build on, or is each new case a "blank sheet" that has to be written from scratch?
In my opinion, effective reuse is the basis of all modern IT approaches, from languages and libraries to DevOps solutions. Take build, configuration and meta-description systems; in Docker, for example, they are all hierarchical, letting you effectively reuse existing work and add only your own "cherry" on top of a ready-made layer cake. Any large organization accumulates and, ideally, systematizes this expertise.
Of course, there should be a set of ready-made reference virtual machines, company-wide or specific to the company's domain, "templates" of test data, and specialized solutions and architectures for test automation of the kind: if we have a Java stack, the Web, Angular, an average-size application with high data variability, then it is worth starting from solution No. 25. A sketch of that layered-reuse idea follows below.
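A minimal sketch of the "layer cake" idea in plain Python; the layer contents are hypothetical. A company-wide base description of a test environment, a domain layer on top of it, and a thin project-specific "cherry" merged last.

```python
from copy import deepcopy

# Hypothetical layers of an environment description, from generic to specific.
COMPANY_BASE = {"os": "ubuntu-16.04", "monitoring": True, "jvm": "8"}
JAVA_WEB_LAYER = {"app_server": "tomcat", "browser_grid": "selenium-hub"}
PROJECT_CHERRY = {"jvm": "8u102", "test_data": "sanitized_dump_v3"}


def compose(*layers: dict) -> dict:
    """Merge layers left to right; later layers override earlier ones."""
    result: dict = {}
    for layer in layers:
        result.update(deepcopy(layer))
    return result


if __name__ == "__main__":
    env = compose(COMPANY_BASE, JAVA_WEB_LAYER, PROJECT_CHERRY)
    print(env)  # only the project layer had to be written from scratch
```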
In this sense the ExTENT 2015 conference is quite telling. This is exactly the approach I, together with colleagues, tried to implement at my previous job, have now organized in DPI.Solutions, and take an active part in organizing within EPAM. Large companies, in my subjective opinion, have outgrown the situation of lacking ready-made solutions; rather, we see the inverse: the problem is not a shortage but an excess of "expertise off the shelf", and what needs improving is the meta-description and ranking system, otherwise you drown in the sea of ready-made options.
- Over the past few years a layer of ready-to-use solutions for building isolated test environments has been actively developing. How good are they? Can you single out the most interesting and convenient ones? How did people manage without them before?
Unfortunately I cannot answer this question in detail; I have not personally done a comparative analysis of such tools. How did people manage before?.. That is a classic evolutionary question. How did our parents manage without washing machines and microwaves? Not to mention mobile phones, computers, the Internet. They managed! But you get used to the good things very, very fast.
- Do you use Docker? Compared with OpenVZ, for example, was it a step toward simpler use? Tell us about your experience. Were there convenient solutions of a similar level before Docker, or even now? What Docker limitations do you run into, and how do you work around them?
Yes, of course; Docker is a great tool. A low entry threshold, very convenient to use, a huge number of ready-made configurations: all of this compares favorably with OpenVZ and similar tools. Docker took a big step toward simplification.
A convenient UI, a very simple and concise set of commands for most tasks, a meta-description of the "machine" in a single file, hierarchy, optimization of network traffic when downloading and of disk space when storing. A sea of pluses. Of course there are also disadvantages; there is no medicine without side effects. Not everything is so rosy when you try to scale seriously, there are API backward-compatibility problems caused by rapid growth, as well as plenty of low-level, specific issues, each with its own recommendations, but I am not competent enough to formulate an extensive list of best practices.
- There is a serious problem: the differences in hardware between test and production environments. There are many questions here, both organizational and technical. What solutions do you see when the hardware and software in the test and production environments vary? How do you feel about full simulation of running programs in a test environment? Should testing include mandatory emulation of hardware and network infrastructure? What solutions exist in this area, or has the time really come when the hardware architecture no longer affects performance in any way?
In my opinion, hardware architecture and the differences between test and real environments are gradually fading into the background. Today both production and the test environment are increasingly located in the clouds, which means that, thanks to the additional layers of virtualization and services, the problem is smoothed out.
I will share my personal experience: I regularly took part in performance testing projects. At one time a detailed description of the testing tasks and a specification of the test environment were part of the contract, and the process of choosing the environment was a complex, sometimes endless debate; otherwise it was impossible to prove the relevance of the results or to interpret clearly whether the project was complete.
Today the cloud environment is written into the contract; to deliver the final performance-testing results we provide the results on the specified environment plus a trend visualization tool, and there are no problems with acceptance of the work. If necessary, the customer re-runs the tests on a new cloud environment and works with the trend. The process of signing a contract and accepting the work performed has been greatly simplified. In my opinion, a particular but very clear illustration.
- Summing up our conversation: which technologies or products of the past ten years would you call breakthroughs in testing and in creating test environments? Can we say unequivocally that an evolution of the tools has taken place?
Let me answer a question with a question: evolution or revolution? Evolution cannot be stopped; progress can be delayed, or even forced to retreat along some "fronts", but it can never be stopped altogether. It is physically impossible! Seriously, without lyrical digressions, we are seeing another turn of the dialectical spiral.
The ideas of the 60s are being implemented here and now in a completely new quality. I do not even know how to characterize such a process; call it an "evolutionary revolution". Virtualization, containerization, the move to the "clouds", the appearance of automation tools both minimalistic and universal, such as Selenium, which, thanks to the flexibility of its original architectural decisions, has spread to "related" areas such as mobile automation and desktop application automation.
- Thank you for the answers.
Conference tickets are on sale now; registration is open.
In addition to the talks by Igor ("Autotests: the same, but better") and Anton ("Good" and "bad" options for running Selenium WebDriver tests in parallel), we recommend paying attention to these:
- No Such Thing as Manual Testing and Other Confusions
- Appium: Automation for Apps
- How to teach robots to play games?
- Hero's Journey to Perfect System Tests - Eight Assessment Criteria for Tests' Architecture Design
- Page Objects: Better Fewer, but Better
- Distributed Systems Testing
- Testing Juno Android applications: CI, Unit, Integration, and Functional (UI) tests. 100% Kotlin, 90%+ RxJava, Spek, JUnit, DSL for UI tests
- Combining manual and automated testing: process and tools
- Shopping List: Things to Remember When Running JMeter Tests
- Static brain removal: what do code analyzers hide?