AI is learning new things but still can’t understand them

November 22, 2018 |

Reading time: more than 10 minutes

AI is an acronym that appears everywhere these days. A friend or colleague might have told you something about artificial intelligence, or perhaps you came across the term on the internet.

Artificial intelligence, however, is a more profound concept than it is often presented to be. You are about to find out what we mean.

Digital assistants like Siri (on devices manufactured by Apple) and Cortana (on Windows computers) provide the accessible features users most commonly associate with artificial intelligence. Autocorrect and predictive text input are other familiar examples.

However, it is important to understand that the examples we just listed hardly qualify as general-purpose artificial intelligence. Look closely, and you will realize that those features exist to help users perform specific tasks or to assist them with narrowly defined activities.

Perhaps things would be a lot easier if tech developers and manufacturers used accurate wording to describe and market their products. They cannot do without the hype, though.

What can AI do?

Consider an event where a major tech firm announces that it is planning to release a new AI feature. In most cases, that company is more or less building a neural network using machine learning. Machine learning typically comprises a series of operations that enable a computer to get better at performing a specific task or job.
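The core idea of "getting better at a task from data" can be shown with a minimal sketch. Everything here is invented for illustration: a toy dataset where the answer is simply double the input, and a model with a single adjustable weight that is nudged toward smaller errors.

```python
# Minimal sketch of machine learning: a program improves at predicting
# y from x by adjusting one weight based on example data.
# The data and learning rate are illustrative, not from any real system.

examples = [(1, 2), (2, 4), (3, 6), (4, 8)]  # pairs where y = 2 * x

weight = 0.0          # the model's single adjustable parameter
learning_rate = 0.05

for epoch in range(200):          # pass over the data many times
    for x, y in examples:
        prediction = weight * x
        error = prediction - y    # how wrong the current guess is
        weight -= learning_rate * error * x  # nudge the weight to reduce error

print(round(weight, 2))  # converges toward 2.0
```

Nothing here "understands" doubling; the program only shrinks a number measuring how wrong it is, which is the limitation this article goes on to discuss.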

Here is the problem: machine learning, even at its peak, is quite limited compared to the popular picture of artificial intelligence. We are not having a go at machine learning here; we know all too well about the excellent capabilities it provides.

In other words, machine learning is a fantastic technology in its own right. Nevertheless, the general-purpose artificial intelligence we described earlier encompasses a lot more than just machine learning. As a result (since most AI tech is based on machine learning), our current artificial intelligence technologies are more limited than they are often made out to be.

If you often watch sci-fi films, then you might have an idea of what we are talking about here. Perhaps, in those movies, there are scenes involving a computerized or advanced robotic brain capable of thinking independently and understanding things like humans often do. We can class such futuristic technology as artificial general intelligence.

Programs or machines that achieve artificial general intelligence will be able to think about many different things and apply the intelligence gained in one domain to others.

Strong AI, for example, is a concept that describes the state of a machine being able to experience the consciousness associated with human beings.

Unfortunately, as much as it pains us to say it, we do not yet possess such a level of technology, and we will not achieve it anytime soon. Siri or Cortana might have impressed you with their work, but in general, your personal assistants cannot think or act as you do. We can safely claim they do not really understand you. To be fair, they hardly understand anything at all.

Unsurprisingly, such personal assistants are known to underperform when users provide them with a limited amount of data about themselves. After all, Apple and Microsoft train Siri and Cortana to do specific tasks very well based on the information they can access.

The critical thing to note here is that those personal assistants might learn to do something, but they still lack any real understanding of what they do.

In general, we have managed to create artificial intelligence assistants and train them to perform specific tasks very well. Nonetheless, how well they get the job done depends largely on the volume of data humans feed them to help them learn and improve on what they already know. It is not good enough.

Can computers think?

The answer to this question depends on how we choose to define the word “think”. Let us first consider the definition in its strictest form.

Computers do not think. They simply, or mindlessly (for lack of a better word), follow the rules set by the humans who created them. Computers do precisely what is asked of them regardless of the nature of the task, the risks involved, the morality in question, and so on. Surely, you can see why they are so useful.

A computer program, therefore, is basically a written set of rules or instructions. Machines running the program strictly follow the code; they never stray from it, which more or less means they can never conceive an original thought of their own.
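This strict rule-following can be made concrete with a trivial sketch in Python. The function and its rule are invented for illustration, in the spirit of the microwave timer mentioned later in this article:

```python
# A program is nothing but written rules. This hypothetical microwave
# controller follows its single rule no matter how absurd the request is.
def microwave_timer(seconds):
    # Rule: heat for exactly the requested number of seconds.
    # The machine never asks whether the request makes sense.
    return f"heating for {seconds} seconds"

print(microwave_timer(30))      # a sensible request
print(microwave_timer(360000))  # 100 hours: the rule is obeyed all the same
```

The machine applies the same instruction to a reasonable request and an absurd one; any judgment about which is which has to come from the human who wrote the rule.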

Computers and programs define the limits of the artificial intelligence we are currently familiar with. What we call artificial intelligence today amounts to programmers writing rules for a machine to follow so that the device appears to be thinking.

For example, if you write a rule instructing a program to think about one thing, do the resulting machine-generated thoughts satisfy the definition of “thinking”?

Computers can learn, though, and humans have successfully exploited this one ability. The machines that play games better than humans have simply learned a great deal, or they have access to more information than any human could ever recall.

If we were to consider the definition of the word “think” based on whether a human recognizes that a person or thing is thinking, then perhaps, some computers can think. However, we consider this standard quite superficial.

Here, if a human cannot tell whether they are talking to another person or to a machine, then the machine is said to be thinking. This condition more or less forms the basis of the Turing test. As far as we know, some computers have passed this test.

Nevertheless, it would be a stretch too far to consider those computers (which passed the test) as devices capable of thinking as humans do. We have a solid argument here: for one, machines are far from capable of replacing humans in everything they do.

Computers still struggle to invent things. They cannot compose good music. By now, you should know what we mean. Computers are not capable of the things intrinsically associated with fundamental human thought. Perhaps, if they really could think (based on the first definition we gave), they would be a lot more capable than they are now.

The inability of computers to think does not rule them out as useful devices for making excellent decisions. All we need is a specific set of accurate and carefully written rules for the computers to act on. The good thing is that computers are better at following instructions than humans: they are faster in what they do, and so on.

Moreover, limits do, and must, exist for obvious reasons. When you set a timer on your microwave to heat your food, the device does precisely what you ask of it (based on already programmed rules coupled with the specific instructions you gave it).

There are times, however, when all the programming, coding, or instructions are simply not enough to get the job done correctly. Then bad things tend to happen, as with the accidents involving self-driving cars today.

Surely, someday, humans will make great strides in artificial intelligence, machine learning, programming, sensors, and so on, and eventually make computers drive cars excellently. After all, if everything is ultimately done right, computers as drivers will be preferable to humans, since human error and its related risks will largely be taken out of the equation.

Pros and Cons of Artificial Intelligence

Here are some of the advantages or upsides associated with the use and implementation of artificial intelligence:

  1. Dealing with boring or difficult challenges becomes easier:

Humans tire; they get bored too. Machines do not. Artificial intelligence enables us to use computers to carry out incredibly tricky or tedious tasks. Furthermore, the automation involved brings an increase in efficiency and productivity.

In theory, this frees humans from working on mundane or complicated tasks and leaves them free to be more creative than ever.

  2. Error-free decisions and faster actions or processes:

Time is an important variable, and unsurprisingly, we often strive to use as little as possible of it. With artificial intelligence and related cognitive technologies, we can make faster decisions without compromising on accuracy and other relevant standards.

To err is human. Computers, given the right conditions, do not make errors; the only mistakes that occur result from incorrect or inadequate programming. If AI is used to process data, the chances of errors are minute or even non-existent. Judgment calls are a different matter, though.

  3. Benefits of Machine Learning:

We discussed machine learning earlier in terms of its association with artificial intelligence. Now, let us define it in terms of big data (datasets measured in petabytes). Of course, it is impossible for humans to sift through such quantities of data. Fortunately, artificial intelligence provides a way for us to take advantage of big data regardless of the form it is in.

AI can go through a large amount of data as fast as the processor employed allows, and at the same time, it can derive insights considerably better and far quicker than any human could. Big data processing and analysis, in truth, make up only a small portion of the capabilities attributed to artificial intelligence; hence, machine learning is the much more accurate term here.
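The scan-everything-derive-a-statistic idea can be sketched in a few lines of Python. The million "sensor readings" below are synthetic numbers generated for illustration; real big-data work runs on distributed systems, but the principle is the same:

```python
# Sketch: a machine summarizing far more records than any person could read.
import random

random.seed(42)  # make the illustrative run repeatable

# A synthetic dataset: one million readings centered around 20.0.
readings = [random.gauss(20.0, 5.0) for _ in range(1_000_000)]

# Scan the whole dataset and derive simple statistics from it.
average = sum(readings) / len(readings)
peak = max(readings)

print(f"records: {len(readings):,}  average: {average:.2f}  peak: {peak:.2f}")
```

A person could not read a million records in a lifetime of evenings; the program scans them in well under a second, which is the entire advantage being described here.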

  4. Risk-taking:

Machines powered by artificial intelligence can do the jobs humans refuse to do, and they can execute tasks the average individual would have to carry out very carefully. With artificial intelligence, the risks people get exposed to in the name of research lessen significantly.

Consider space exploration, for example. Robotic probes can travel close to the sun in scorching temperatures to gather data, and humans send rovers to Mars to study the landscape, explore the planet, determine the best paths to take, and check for dangers that lie ahead. These examples are only a small sample of the endless benefits and possibilities that come with AI taking risks on behalf of humans.

On the other hand, these downsides or disadvantages do exist too:

  1. Loss of jobs and wealth:

This outcome is one unfortunate consequence of the use of artificial intelligence and related technologies. A good number of lower-skilled workers will lose their jobs to AI. To be fair, robots have already eliminated a considerable number of positions on the assembly line, but strong artificial intelligence will take things several levels up. Job losses will hit humans harder than ever.

Consider the self-driving car concept, for example. If (and when) it eventually succeeds, millions of human drivers, from regular taxi drivers to well-experienced chauffeurs, will lose their jobs. They will all be affected one way or another.

We know of the argument that AI will create more wealth than it destroys. We strongly disagree. If there is anything we have learned in the last couple of years of advancement in IT, it is that newly created wealth tends to end up in the hands of a few people, and this can only be a bad thing.

  2. Dangerous concentration of power:

If things continue to progress the same way, AI technologies will eventually be concentrated in the hands of a few people, organizations, or nations. These bodies will have immense power, and what they do with it is of great concern (as it should be).

With AI, there is always the risk that once control is taken away from humans, people will get killed indiscriminately (since humans no longer have to pull the trigger). The burden of morality and all the other things that prevent people from making inhumane decisions will be non-existent.

Considering the wars humans have waged against one another as civilizations have risen and fallen, improvements in AI might usher in an era more dreadful than anything we have ever experienced.

  3. Poor judgment calls or the inherent inability to make calls:

Artificial intelligence, even with all its super abilities or features, is incapable of making a judgment call. Personally, we believe AI might never gain the ability to make the right judgment. Humans, as we all know, tend to consider unique or exceptional circumstances when making judgment calls. Artificial intelligence cannot do this, and we do not see things changing anytime soon.

Perhaps this is why some people believe there is nothing artificial about intelligence. They might be right. Intelligence is more or less a fine balance of emotion and skill, a combination in constant development and evolution. Furthermore, our behavior is a function of the world around us. Things do not always come in black and white or appear as specific, known, or understandable variables.

Hence, when we make judgments, shades of gray sometimes come into play. Invariably, the more artificial our behavior becomes, the more our definitions of things and events are reduced to simply what is right or wrong, instead of the swift mid-course corrections and exchanges that perhaps make us both human and intelligent at the same time.

AI undeniably has great potential. Humans, however, get to decide if it becomes an overwhelming force for good or evil. Individuals, governments, and organizations can choose to use their own judgment to apply artificial intelligence productively, but we are not holding our breath on this matter.

In other words, we can only hope that the rise of AI does not get out of hand. Perhaps humans should not be the ones who get to decide on the use of artificial intelligence. Unfortunately, AI is not yet smart enough to decide for us, and this is both the ultimate advantage and the ultimate downside of artificial intelligence.

TIP:

If our discussion of AI has been of interest to you, there is a good chance you may want to check out Auslogics BoostSpeed. If your PC is struggling with slow-downs or performance issues, this program will do an incredible job of helping your computer get back to its best.
