Automation, including AI, is just a tool. And tools are only there for us to use; they won’t replace us.
Lately, I have been hearing this sentence quite often, and I keep asking myself: are we seeing the same thing?
Disclaimer: This article is not a scientific dissertation. Although it is quite well researched, it is my opinion on one of the possible outcomes of AI. The subjects I tackle are not set in stone, but they are probable, and that is all you need to start worrying.
Lately, we hear a lot about AI; next to bots and blockchain, it is one of the latest hypes. We see some people promising unfathomable things, and others completely dismissing its value.
The problem is that the loudest criticism and the biggest praise often come from people who don’t even know what AI is. I am not an AI researcher, and I do not work in data science, but it is one of the subjects close to my heart.
And in short, my opinion is: you should be worried.
Let’s start with the terms.
Right now there are a lot of issues with terminology in the testing world. Joe Colantonio summed it up in TestTalks #194: when he and other testers were discussing AI, they could agree on only one fact: when they say AI, they mean “something using statistics”. Beyond that point, each of them saw AI completely differently.
But in the scientific world, AI like this is called “Weak AI” or “Narrow AI”. Yes, it is a significant step forward. But it is just a step, not the end of the journey. Machine learning and deep learning allowed AI to move from being “a bunch of if statements” to something more autonomous. And the world is already trying to move past those: research on “contextual learning”, i.e. a weak AI that doesn’t need tons of data to learn, is already in progress.
When the scientific world talks about the risks of AI, it is usually talking about “Strong AI” or “Artificial General Intelligence (AGI)”. In simple terms: AI presenting human-level mental abilities, not in one specific task but in all of them. There is also the term “superintelligence”, meaning “general intelligence far surpassing the human level”. And it won’t take long for AI to go from AGI to superintelligence.
Max Tegmark’s Life 3.0: Being Human in the Age of Artificial Intelligence tackles many ethical dilemmas connected to AI and the threats it can bring to humankind. Here I will focus only on testing.
So why should you be worried about AI?
Let me make one thing clear: strong AI is not science fiction. Scientists can’t agree on when it will happen; some estimate a few years, others go as far as 1,000 years.
Right now, people argue that AI can’t do “creative jobs”. Yes, it can do simple tasks even better than us, but it cannot handle creative ones. That argument won’t work in the case of AGI. There is a debate to be had about whether machines will ever have true creativity, but they will be creative enough to do most jobs.
Of course, when it becomes available, it will be costly to use. But then again, humans are not cheap either. Plus, we are unreliable: holidays, sick leave, accidents, and typical human mistakes. Have you heard about cognitive biases? A year or two ago, they were all the rage in the testing world. In short, we tend to take shortcuts in our thinking that can lead us to wrong conclusions. It is the fault of both our “software” and our “hardware”. We can operate at our peak abilities only for a short while.
AI will break through the limits of the human body.
An AGI won’t have that problem: as long as we can provide sufficient heatsinks, it will be able to operate at 100% all the time.
It won’t need a coffee break, or an 8-hour shift, or any human rights at all.
And the cost will go down as time goes on. That is basically the rule: look at the price and computing power of the first personal computers and compare them to your computer or smartphone. They are so much cheaper and so much more powerful, and it all happened in less than 50 years.
Still not convinced? Do you think I have read too much sci-fi?
Well, lots of great minds disagree with you:
Stephen Hawking, Neil deGrasse Tyson, Elon Musk, Sam Harris, Richard Dawkins
I will use a quote from the last of them:
There is nothing in our brain that isn’t governed by laws of physics, and so in principle, it must be possible to simulate everything that the brain does in silicon. And so in principle, it must be possible to make robots that are capable of doing everything that humans can do.
But let’s be honest: this is still in the future. How far in the future? I can’t say. Max Tegmark and Paweł Noga would advise you against suggesting a career in IT to your children.
Pff, it is still far away, so why should I worry?
Well, because we don’t need AGI to build a tool that steals your job.
Remember: we use tools to make our tasks simpler or faster (or to make impossible work possible).
Machine learning is already showing that it is capable of doing the simple tasks that take up most of an enterprise lawyer’s workday.
There are already machine-learning translation apps with quite good results.
Even in our field, we already see tools trying to do this, for example Mabl, and of course the context-driven community is already pushing back, calling it “The Omniscient Tool Scam”.
They try to compare Mabl to a “real smart QA engineer”.
Look at the key words: “real smart”. Let’s say that Mabl can provide 50% of what a “dumb tester” would be able to do. You know what? Its price is still a lot lower than that of the dumb tester.
How many testers are “really smart”?
I am going to avoid defining that term. For the sake of having a number, I will go with the Pareto 80-20 rule: 20% are really smart, another 20% are dumb testers, and the rest are average (I would classify myself as part of this group).
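To make the economics concrete, here is a back-of-the-envelope sketch of the “value per dollar” argument. Every number below is hypothetical, made up purely for illustration; it is not real Mabl pricing or a real salary figure.

```python
# Back-of-the-envelope comparison of "testing value per dollar".
# All numbers are hypothetical, for illustration only.

tester_capability = 1.0       # baseline: what an average tester delivers
tester_yearly_cost = 60_000   # hypothetical salary + overhead

tool_capability = 0.5         # assume the tool delivers 50% of the baseline
tool_yearly_cost = 10_000     # hypothetical yearly license cost

tester_value_per_dollar = tester_capability / tester_yearly_cost
tool_value_per_dollar = tool_capability / tool_yearly_cost

# Even at half the capability, the tool wins on value per dollar
# whenever it costs less than half of what the tester costs.
print(tool_value_per_dollar > tester_value_per_dollar)  # True
```

The point is not the exact figures; it is that a tool does not need to match a human to be the economically rational choice for a budget-constrained project.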
I know little about Mabl; it is on my to-research list, but I haven’t had time to play with it yet. I am going to reserve my opinion until I try it myself or see an actual review from someone who has used it.
Ubisoft uses AI to find bugs faster; Microsoft has been using formal methods to test drivers for a long time. Static code analysis is getting better and better. Tools are improving.
Tools allow us to do a better job, faster.
But do you remember the milkman? There is a tool called the fridge, and it killed that profession. The alarm clock killed the knocker-uppers. Sure, all of those opened up new jobs too, but:
those are not the same jobs as the ones made obsolete. Usually, they require more specific know-how.
“But those weren’t creative jobs; those were simple, repetitive tasks!”
You are right. But do you know what is a creative job?
Sales! And somehow KFC and McDonald’s now have automated ordering systems, with only one actual person taking orders.
I don’t know the numbers, but I would assume a real person can sell more. The machine, however, can sell faster.
It is true that tools won’t replace “real smart testers”; those people will be working on the projects that require peak quality. But for projects and products that are satisfied with “shippable quality”, well, I see a future where tools make it so simple that even the PM and the devs will be able to handle it.
The writing and running of tests is not a goal in and of itself — EVER. We do it to get some benefit for our team, or the business. If you can find a more efficient way to get those benefits that work for your team & larger org, you should absolutely take it. Because your competitor probably is. – via Sarah Mei
Remember that testing is a supporting role; we are there to help. There is someone who has a vision of the product. He doesn’t know how to build it, so he hires developers. They don’t know how to check whether the quality is good, so the owner hires testers.
But if the owner or the developers learn how to get quality to an acceptable level, he won’t need us. And of course, the next step is that the owner won’t need the developers either.
Time to sum it up
There is an old saying: Admitting the problem is the first step in fixing it.
We may refuse to admit there is a problem and keep doing things our way. But we will just be ignored, and the world will proceed without us. As it was with Agile, and as it was with DevOps, so it will be with AI.
I don’t have a solution to this problem. I am not even sure it is a real problem. After all, the Modern Testing concept sees testing evolving, but it does not see a modern tester role.
There is only one question on my mind: if we can’t make 100% bug-free software, how can we make a 100% bug-free AI? And what will be the consequences of those bugs?
You can’t say AI and the machines are coming, because they have already arrived.
Elon Musk may have admitted that the level of automation on the Tesla production line was a mistake, but the fact that he was able to do it in the first place speaks volumes. Lely has been automating farm equipment for a few decades now.
Technology comes in increments; there is no Big Bang moment.
And what do you think? Are you worried about AI? Or is there no threat?
If you want to read more of my articles about science, check here.
A few additional resources:
There is one huge faux pas I committed: I completely forgot to thank the people who helped me with proofreading this article.
Thank you, Noemi and Bob (from the MoT Bloggers Club), for your help!