Written by Trey Edgington
Reviewed by Hinrich Eylers, PhD, PE, MBA, Vice Provost for Academic Operations and Doctoral Studies.
Since OpenAI launched ChatGPT in late 2022, the topic of AI has sparked heated debates in business, government, education and other fields. People are wondering, “Is AI good or bad?” The answer? It’s complicated.
When asked how they felt about the advances in AI, 54% of Americans said they were cautious, 22% said they were scared and 19% said they were excited.
As with any new technology, particularly one as transformative as artificial intelligence, it’s crucial to examine all aspects of it before making a decision about its possible benefit or detriment to society.
Even people with computer science or mathematics degrees may find AI difficult to understand — much less define — and those of us who are merely tech-savvy (or not) may find it impossible to grasp the nitty-gritty of an AI model.
The good news is, we don’t have to. Simply put, artificial intelligence is a broad field of computer science that develops software, systems and machines that are able to complete tasks that, until recently, could only be accomplished through human intelligence.
Though there has been quite a stir about AI since late 2022, it's actually been around much longer. Longstanding examples include spam filters, recommendation engines and voice assistants like Siri and Alexa.
On a broader basis, many like to think of AI in two ways: narrow AI and artificial general intelligence. In this construct, narrow AI specializes in a dedicated area, such as analyzing X-rays or MRI images. Artificial general intelligence, on the other hand, refers to systems that could acquire and synthesize knowledge from a variety of sources to produce more humanlike output on virtually any topic.
AI can benefit society, governments and business alike, and because the technology is so new, there are likely more positive use cases to come.
By optimizing processes with AI, an organization can reduce the time and resources it would otherwise spend on tasks employees complete manually. The ways AI helps range from data analysis to content generation. Of course, human beings still need to review, fact-check and sometimes edit AI-completed tasks. It's also important to consider the ethics of using AI at work.
After an AI model has analyzed and interpreted a large dataset for a specific task at a specific business, stakeholders can use that analysis to make better decisions. Not only will they have a more complete picture to base decisions on, but they'll also have the AI's predictions. The improved decision-making process can help a business or organization thrive where it may not have previously.
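To make that concrete, here is a minimal sketch of the kind of predictive modeling described above, written in Python with the scikit-learn library. The data is synthetic and the model choice is an illustrative assumption, not any particular business's setup.

```python
# A minimal sketch of AI-assisted decision-making: train a model on
# historical data, then surface its predictions to stakeholders.
# The data here is synthetic; a real project would use business records.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for historical records (e.g., "did this customer churn?")
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Stakeholders see both the underlying data and the model's predictions.
preds = model.predict(X_test)
print(f"Held-out accuracy: {accuracy_score(y_test, preds):.2f}")
```

Note how the model is evaluated on held-out data before anyone acts on its predictions — the code-level version of the human review the article calls for.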
In the past, people had to perform boring, repetitive tasks like data entry and email management. Now AI can handle these tasks, freeing humans for important work that AI cannot do. AI can also reduce human error, especially in decisions where emotions, personal opinions or biases could affect accuracy.
AI helps researchers as well. Human researchers spend a significant amount of time collecting and analyzing data. AI can expedite the process by finding relevant literature across global databases — a task that would be nearly impossible for a person to do exhaustively — and then help review and summarize what it finds. It can also draw on large datasets to make informed, potentially more accurate predictions for proposed experiments or tests.
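As one hedged illustration of AI-assisted literature review, the sketch below asks an LLM to summarize a paper abstract via the OpenAI Python client. The model name and prompt are assumptions — any comparable LLM API would work — and, as noted above, the output still needs human verification.

```python
# Illustrative sketch: summarizing research text with an LLM.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name below is a placeholder for whatever model you use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

abstract = "..."  # paste a paper abstract here

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: substitute your preferred model
    messages=[
        {"role": "system",
         "content": "You summarize research abstracts in two sentences."},
        {"role": "user", "content": abstract},
    ],
)
print(response.choices[0].message.content)
# Remember: verify the summary against the source; LLMs can hallucinate.
```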
Is AI bad? Though there are many benefits to AI, there are also drawbacks. Before deciding whether artificial intelligence is bad, we should explore further.
Leveraging AI for repetitive tasks may save an organization time and money, but not necessarily jobs. Roles in manufacturing, data entry and retail checkout may be sacrificed to AI as it becomes more sophisticated and ubiquitous. In fact, Goldman Sachs forecast in March 2023 that AI could impact 300 million full-time jobs globally, according to CBS News.
Because AI is programmed by humans, some human biases end up in the code. The training data can also be biased. The ramifications can ripple through society at a rate that's hard to fathom, which is why addressing bias is so important, even when it's unintentional.
Security risks may be AI's biggest drawback. They can affect people on a personal level (identity theft, for instance), on an organizational level (data theft or ransomware that shuts down operations entirely), on a community level (say, a bad actor taking down a city's electrical grid) and even on a national level, as with the theft of sensitive government information.
Deepfakes are audio and video files altered to look and sound like a specific influential person, such as a political figure or businessperson. In a doctored video, for example, a U.S. president may appear to say something against their platform, or much worse. Such videos could affect the outcomes of elections while being difficult to disprove. Deepfakes could likewise damage businesses and other organizations, fueling negative perceptions, boycotts and other detrimental actions.
In the academic sphere, plagiarism represents another liability for AI. In fact, when ChatGPT and other LLMs became widely available, one of the first concerns was plagiarism: with a few easy-to-write prompts, students can generate entire essays. Early on, this was nearly impossible to catch, because each output — no matter the input — was completely different. AI plagiarism checkers now help mitigate the problem, but they're not perfect.
There are many ways to create misinformation with AI, but there are also ways to spot it. Often, spotting it starts with a gut check. Ask yourself if the message lines up with the person in the video. Check to see if the mouth is synced with the audio. Look closely at the details of still images. Do the hands have five fingers? Do the shadows fall where they should? But remember, AI technology improves daily, so it won't always be as easy as spotting a three-legged dog.
Like the tools that detect plagiarism, there are tools that check videos for authenticity. These AI checkers may use pixel-level analysis or other techniques to uncover manipulation.
There are a few ways to protect yourself from AI risks.
Choose your AI apps carefully. Capitalizing on high demand, unethical developers have created harmful AI apps that can steal your data once installed, so do your research before downloading.
Don't put personal information into an AI chatbot or LLM like ChatGPT. Information you provide may be stored on the provider's servers, and hackers have learned to target those servers to mine personal data, including bank account numbers and contact information. Before using an AI program, turn off any chat-saving feature if you can.
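If you do use chatbots, one precaution is to scrub obvious personal details before text ever leaves your machine. Below is a naive, regex-based sketch using only Python's standard library; the two patterns are illustrative assumptions and would miss plenty of real-world personal data.

```python
# Naive sketch: redact obvious personal details before sending text
# to a chatbot. Real PII detection is much harder than two regexes.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach me at jane.doe@example.com or 555-123-4567."))
# -> "Reach me at [EMAIL] or [PHONE]."
```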
Finally, take common cybersecurity measures: create strong, unique passwords, set your devices to their highest security settings and keep your antivirus software up to date.
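For the "strong, unique passwords" part, you don't even need a third-party tool: Python's standard library ships a cryptographically secure generator. A minimal sketch:

```python
# Generate a strong, random password with the standard library's
# cryptographically secure `secrets` module.
import secrets
import string

def make_password(length: int = 16) -> str:
    """Return a random password drawn from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_password())  # different every run; store it in a password manager
```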
Artificial intelligence isn’t just for businesses, schools and governments; we use it in our everyday lives as well.
One of the most significant and positive uses of AI is in healthcare. From new medicines to diagnostics, AI is already making a serious impact, improving patient outcomes and increasing efficiency.
AI in travel goes far beyond self-driving cars. AI-enabled chatbots can help travelers book reservations and research their destinations. In-room assistants, similar to Alexa or Google Home, can change the room temperature, answer questions about local attractions or tell guests when the hotel restaurants are open. Some hotels even allow guests to check in via facial recognition.
The benefits of AI for logistics impact manufacturers, transportation services and consumers alike. Not only do we want our goods delivered on time, but some items — medications and food, for example — need to arrive on time. AI can improve logistics processes in many ways.
When we think of AI in customer service, we think of chatbots. Generative AI systems like LLMs can now provide tailored answers based on company and user data as well as previous interactions. Natural language processing helps a chatbot ascertain what customers actually mean, rather than simply mapping fixed inputs to fixed outputs.
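To see the difference between rigid input-output matching and something slightly more flexible, here is a toy FAQ bot built on the standard library's difflib fuzzy matcher. It's a deliberately simple stand-in for real natural language processing — which deals in meaning, not string similarity — and the questions and answers are made up for illustration.

```python
# Toy FAQ chatbot: fuzzy-matches a user's question against known
# questions instead of requiring an exact phrase. A stand-in for
# real NLP, which models meaning rather than string similarity.
import difflib

FAQ = {
    "what are your hours": "We're open 9 a.m. to 5 p.m., Monday through Friday.",
    "how do i reset my password": "Use the 'Forgot password' link on the sign-in page.",
    "where is my order": "Check the tracking link in your confirmation email.",
}

def answer(question: str) -> str:
    """Return the closest FAQ answer, or hand off to a human."""
    matches = difflib.get_close_matches(
        question.lower(), FAQ.keys(), n=1, cutoff=0.5
    )
    return FAQ[matches[0]] if matches else "Sorry, let me connect you with a human agent."

print(answer("What are your hours?"))        # matches despite case/punctuation
print(answer("How can I reset a password"))  # close enough to fuzzy-match
```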
Just as in business and healthcare, AI can have a huge impact on education. In fact, it's already improving life for students and teachers alike.
AI can help instructors streamline curriculum planning with evidence-based pedagogical methods, minus the time-consuming research. Teachers can also use AI to tailor lessons to students with different needs.
Digital textbooks equipped with AI can give each student what they need when they need it. For students who may find a particular subject challenging, the AI textbook can create a more personalized learning program, allowing those students to work at their own pace.
Similar to AI-enabled textbooks, using ChatGPT or other LLMs as AI tutors can also improve comprehension and grades. AI tutors — the LLM of your choice — are available 24/7, and they are often free. Virtual AI tutors also have fewer, if any, knowledge gaps compared to their human counterparts. Just remember to check any information given by AI, as those systems are known to hallucinate (give incorrect information) from time to time.
The future of AI is uncertain: the technology will continue to advance, and in ways that are hard to predict.
Though we don't know exactly where AI will go, there are trends worth watching.
As discussed, there are many ways AI can go awry. With so much potential, both good and bad, it's crucial that developers put guardrails in place to keep AI safe for everyone.
Transparency and accountability are key to ensuring AI is safe and ethical. Some AI systems work in a "black box," meaning no one knows exactly how their decisions are made. With such a system, no one can be held accountable, which can have significant safety and legal repercussions.
Another concern surrounds intellectual property, which has been debated since ChatGPT’s release. Because most of the training data was scraped from the internet, very few creators were compensated for their intellectual property. Many lawsuits have been filed and the courts have come to some conclusions, but nothing that solves the problem entirely. There is a lot to consider when writing these laws, so it may be a while before we have a solution that is fair to everyone involved.
We know AI can be used in ways that greatly benefit individuals, businesses, governments and society. But like any powerful tool, it can also be turned to nefarious purposes: stealing information, sowing discontent and perpetrating other illegal activities.
So, is AI good or bad? Even with all the information, it’s still a tough call and one that we have to make as individuals, at least for now.
Curious to learn more about AI and its applications? Check out University of Phoenix’s Center for Educational and Instructional Research Technology in the University’s College of Doctoral Studies, where research projects, whitepapers, blogs, and even dissertations may be available to shed light on this cutting-edge topic.
To learn more about how artificial intelligence works, check out the online technology degrees at the University of Phoenix, where you can deepen your understanding of AI in an online learning environment as part of the curriculum.
Ready to take the next step? Request more information today and prepare for your future.
While I understand the need for plagiarism detection tools in academia, the focus of higher education institutions (HEIs) should be to teach students the importance of academic integrity, personal development and growth, skill building, and the responsible and ethical use of any and all technologies.
J.L. Graff
Associate dean in the College of Business and Information Technology at University of Phoenix
Trey Edgington holds a Master of Arts in creative writing from the University of North Texas, and his short fiction has been published in several literary journals. His professional journey also includes more than 15 years of experience in higher education and healthcare marketing. Over the course of his career, he has held such roles as adjunct instructor of English, senior content editor & writer, and content and SEO manager. Most recently, he has taken on the role of generative AI language consultant.
Dr. Eylers is the University of Phoenix vice provost for Academic Operations and Doctoral Studies. Prior to joining the University in 2009, Dr. Eylers spent 15 years in environmental engineering consulting, sustainability consulting, teaching and business and technology program management. He was amongst the first to be licensed as a professional environmental engineer in Arizona.
This article has been vetted by University of Phoenix's editorial advisory committee.
Read more about our editorial process.