
Artificial Intelligence: Changing How We Do What We Do

By Tommy Carver

AI is probably the hottest and most frequently thrown-around buzzword right now. It has quickly transformed from academic theory into the most in-demand marketing tool. It seems like every website sprinkles the words “artificial intelligence” and “machine learning” throughout its marketing pages just to grab attention. But what really is AI, how can we use it, and where is it leading?

As its name suggests, AI is the practice of creating something artificially ‘intelligent’, particularly through programming. Through different applications of math, statistics, and computer science, it is a way of teaching a computer to solve a problem through an underlying understanding of the situation. In many cases, these systems are created for exactly one purpose: solving the problem of the context they are in. For example, a chess AI is taught to ‘solve’ the game of chess by efficiently searching through the moves that could be made and determining a ‘best’ one. While this chess AI can swiftly defeat professionals who’ve played the game their whole lives, in any other situation it is just as clueless as a Roomba searching for dirt. However, with more modern techniques designed to mirror the human brain, some programs are becoming increasingly able to handle a variety of scenarios.
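To make that idea concrete, here is a minimal sketch of the kind of move search a simple game-playing AI performs. It is written in Python against a hypothetical toy game tree (not real chess, and not any particular engine): explore the available moves, assume the opponent picks the worst reply for you, and choose the move with the best guaranteed outcome.

# Minimal minimax sketch over a hypothetical toy game tree.
# A node is either a dict {move_name: child_node} or a number: the score
# of a finished position from the first player's point of view.

def minimax(node, maximizing=True):
    """Return the best score this player can force from the position."""
    if isinstance(node, (int, float)):          # finished position: just score it
        return node
    scores = (minimax(child, not maximizing) for child in node.values())
    return max(scores) if maximizing else min(scores)

def best_move(position):
    """Pick the move whose subtree guarantees the highest score."""
    return max(position, key=lambda move: minimax(position[move], maximizing=False))

# A made-up position with two candidate moves and two opponent replies each.
position = {
    "push pawn":      {"reply A": 1, "reply B": -2},
    "develop knight": {"reply A": 0, "reply B": 3},
}

print(best_move(position))   # -> "develop knight": its worst case (0) beats the pawn push's (-2)

Real chess engines add depth limits, position evaluation, and heavy pruning on top of this, but the core loop of trying moves and scoring the outcomes is the same.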

There are three main ways that computers ‘learn’. The first, and the most common, is supervised learning. This technique involves feeding an algorithm some piece of data alongside a label that attaches it to a class. Think of giving the algorithm a photo of a dog and the label “dog.” By immediately checking whether its prediction matches the label, the AI can adjust its ‘brain’ to correct for its mistake and ‘learn’. The second form is unsupervised learning, which is more complex and involves a serious understanding of the task to be solved. Instead of telling the AI what is right and what is wrong, unsupervised learning hands it unlabeled data and lets it discover patterns and structure on its own, such as grouping similar examples together. Oftentimes, these two techniques are combined to create a fantastic and efficient duo. Lastly, reinforcement learning is similar to supervised learning in that the AI gets feedback about which actions are right or wrong, but different in that the feedback is often delayed (the result arrives only after the action is taken) or hidden in the environment (certain states are simply better than others), much like a chess engine learning which pieces and positions are worth more. Each form has its benefits and issues, and could very well become obsolete as new forms emerge.
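To show what supervised learning looks like in practice, here is a minimal sketch in Python. The scikit-learn library and the tiny made-up dataset (weight and ear length as features for dogs and cats) are illustrative choices of mine, not something from the article.

# Minimal supervised-learning sketch using scikit-learn.
# Features are made up: [weight_kg, ear_length_cm] for a few animals.
from sklearn.tree import DecisionTreeClassifier

X = [[30, 10], [25, 12], [4, 7], [5, 8]]    # the data we feed the algorithm
y = ["dog", "dog", "cat", "cat"]            # the labels that 'supervise' it

model = DecisionTreeClassifier()
model.fit(X, y)                             # the model adjusts itself until its predictions match the labels

print(model.predict([[28, 11]]))            # -> ['dog'] for a new, unlabeled animal

The same fit-then-predict pattern carries over to far larger datasets and models; only the data and the algorithm change.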

Now, you may be wondering why the field has grown so rapidly in the last few years as opposed to the decades before. The most important answer is the rise of big data. The collection of vast amounts of data finally gives these algorithms enough examples to learn from accurately and efficiently. Nowadays, almost all information is stored in databases or some other tech-friendly format. Being able to access years upon years’ worth of data and feed it to these always-hungry data monsters has greatly increased the effectiveness and capability of these technologies. Complementary to the data, the rise of labeling has also proved extremely useful for AI. Billion-dollar companies have been built on the premise of labeling data for the world to use. Unicorn companies like Scale and Labelbox have truly shone and unlocked much of the potential AI was believed to hold. Having not only the data but massive amounts of labels for that data has allowed algorithms to learn from more examples than ever thought possible. Lastly, significant increases in computing power have contributed to the rise of these powerful tools. With any laptop able to run a simple decision-making model, and cloud computing within easy reach, there is very little stopping developers from building massive systems. NVIDIA and Intel have each committed to designing chips specialized for artificial intelligence workloads that far outpace general-purpose processors on these tasks. With the rise of big data, labeling, and accessible computing at scale, AI has truly shone during its time in the spotlight.

The most controversial and heavily debated questions about computer intelligence are: is it good, should we use it, and if so, how? I’ll try to address both sides of the argument, starting with the benefits, but I highly recommend looking into this on your own as well. Firstly, the capabilities are nearly endless. Artificial intelligence has found ground in nearly every industry in some form, spanning agriculture all the way to energy, financial services, and even manufacturing. Next, AI allows for entirely new perspectives on problems humans have been trying to solve for years, and letting a fresh set of ‘experiences’ approach those problems has proven successful. At the same time, it can consume and learn from multiple lifetimes of human data, transcending many human limitations: a system can study millions of examples of successful trades in the time it takes one human to understand a single one. Since computers can perform tasks much faster than a human can, they let us tackle much more difficult problems, with these tools handling the grunt work, data aggregation, and analysis.

Obviously, not everything is good. As the most common argument goes: computerization is automating and taking jobs from a lot of people. As AI takes over many industries and fields, many people are finding their jobs replaced by robots and other automated solutions. In addition, there is still a large policy vacuum around the regulation of AI. Companies are largely left to make these decisions for themselves, as seen in IBM’s decision to stop researching and selling facial-recognition software. While policy is catching up, plenty of ethically murky situations remain. In a similar vein, a lot of training data is sourced from unaware individuals. Most of the time, consent to data use is given through extremely long and complex terms-of-service agreements, but we all know no one reads those. Europe recently implemented the GDPR, which gives people much more control over their personal data; in the US, however, it is still hard to know who uses your data and how. Lastly, and most problematic, is the implicit bias in the training of these tools. Most training data fails to accurately represent or account for underrepresented groups of people. There have been major issues with racial bias in facial-recognition systems, which can even fail to recognize specific groups of people. Until we find a way to train these algorithms fairly for all people, they pose a risk to those who are underrepresented and can fall victim to training bias.

AI is obviously one of the most hyped-up and talked-about topics in recent memory. But it is a more approachable subject than it appears on the surface. While it can feel out of reach, it is very rewarding and exciting once you dip your toes in.

Tommy Carver

Tommy is a junior in Computer Science and is a member of T&M Class XXVI. Next summer, he will be working as a Forward Deployed Engineer at C3.AI.
