This article was updated on December 1, 2023.
Written by Robert Strohmeyer
Reviewed by Jessica Roper, MBA, director of Career Services at University of Phoenix
As 2023 began, many of us found ourselves working alongside a new kind of colleague: artificial intelligence (AI).
While not exactly new, AI has broken out in the past year as an overt force in the workplace, automating simple tasks and helping optimize overall efficiency across a variety of industries. Most notably, OpenAI’s ChatGPT had everyone talking when a free preview was released in December 2022.
Many more artificial intelligence tools are available today besides ChatGPT, which is a chatbot built on a language model called a generative pre-trained transformer (GPT), and the months ahead are certain to bring still more.
While it can be tempting to dive headlong into the brave new world of automation that artificial intelligence offers, the technology is still new and fraught with both flaws and unanswered questions. In the past few months, we’ve seen numerous examples of artificial intelligence creating problems for its creators and users at work, including lawsuits over copyright infringement and insensitive use in sensitive communications. And Google’s Bard AI made a costly flub during an early public demo.
So, it’s important to consider the risks and benefits of this emerging tech and apply it wisely at work. Here’s what you should know before using AI-powered tools and systems at your job.
Artificial intelligence, simply put, combines computer science with large data sets to build software that makes decisions algorithmically.
While the field of AI implicitly includes any program that simulates human thought, the term today refers mostly to algorithmic learning, where programs or robotics can analyze data, make decisions and create output without being explicitly told how to make each decision. The methods by which these algorithms autonomously analyze data (text, images, numbers) and discern trends or patterns within them to make decisions are collectively known as machine learning (ML).
Machine learning algorithms have been gaining in sophistication for several decades now. They have been common in big-data business applications since at least the dawn of the social media era, particularly in AI-powered marketing applications, such as advertising and content personalization, where complex networks make split-second decisions about which content to load for a given user based on available data about the individual.
If you’ve ever been creeped out by the uncanny coincidence of content in your social media feeds displaying ads for things you’ve just been looking at or talking about, that’s machine learning at work — and it’s been going on for years. The last three tech companies on my resumé were focused primarily on machine learning-driven marketing applications like these, and their use in the broader business market has only grown in recent years.
Up to this year, most AI applications required a lot of technical effort to use, with specialists in data science and various other tech fields having to run complex queries against the machine learning models to get usable output.
What’s new is the emergence of relatively intuitive interfaces that let ordinary people ask AI systems to perform useful tasks based on what they have learned from their large data sets. ChatGPT can generate text content in response to typed prompts. Other tools, such as DALL-E 2, can generate custom graphics based on typed input.
So, instead of needing someone with a PhD in data science to run queries for you, you can now type in what you want using plain English (or a variety of other languages) and get something useful back, at least some of the time.
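To make the shift concrete for technically inclined readers, here is a minimal sketch of what a plain-English prompt looks like when sent to a chat model programmatically, using the JSON message format that OpenAI’s chat API expects. The model name and prompt are illustrative, and the request is only constructed here, not sent, since actually sending it requires an account and API key.

```python
import json

def build_chat_request(prompt: str, model: str = "gpt-3.5-turbo") -> dict:
    """Wrap a plain-English prompt in the message format chat APIs expect.

    The payload pairs a model name with a list of messages; each message
    carries a role ("user" for the person typing) and the prompt text.
    """
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": prompt},
        ],
    }

# The "query" is just everyday language -- no data-science expertise needed.
payload = build_chat_request("Summarize the main risks of using AI at work.")
print(json.dumps(payload, indent=2))
```

The point of the example is how little specialized knowledge the request requires: the entire query is a sentence of ordinary English wrapped in a small, fixed structure.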
The general term for the type of artificial intelligence that ChatGPT and DALL-E 2 belong to is generative AI, so called because these tools don’t just parse data, they actually generate output in novel ways. ChatGPT doesn’t just spit out information it has collected in its databases; it generates new text algorithmically, using the data it has collected and increasingly sophisticated methods of writing content.
This means you can, for instance, give it your grocery list and ask it to render that list as a Shakespearean sonnet. DALL-E 2 doesn’t just paste existing pictures from its database into an image; it generates a new image from scratch based on your request. So if you ask it for a picture of Albert Einstein playing a guitar while riding a unicorn, it’ll figure out a way to create that picture, pixel by pixel. (I’ve tried both these examples personally, and the results are entertaining.)
We’re currently in the earliest days of user-directed artificial intelligence in the workplace, and the possible AI-powered applications for this technology are still coming to light. What’s already clear is that there are good uses and not-so-good uses for these tools.
While it can be tempting to think of these apps as a cheap replacement for human labor, or perhaps a secret weapon to automate tasks to get your own job done effortlessly, the reality is that current technology isn’t really robust enough to fully free us from work.
Even if an AI tool explicitly grants you commercial-use rights to its output, as ChatGPT does, there are risks associated with using the output of these apps as is. Some of those risks are stated clearly on the ChatGPT website: “While we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice.”
That’s a legalistic way of saying, take what you get with a grain of salt.
Further, there are additional intellectual property issues to consider in the commercial use of any content. Without knowing where the artificial intelligence got its underlying data, it’s difficult to know whether the bot is copying text outright or creating something original.
Similarly, just because it’s easy to generate an image of Albert Einstein playing guitar on the back of a unicorn doesn’t mean it’s fair game to use Albert Einstein’s likeness commercially. While there is too little legal precedent to be certain where liability falls when it comes to AI tools, recent cases in which companies have been held accountable for the output of online tools generally should give us pause, and that reasoning will likely apply to AI tools as well.
Leveraged wisely for the right tasks, generative artificial intelligence can automate tasks to make complex jobs easier, help you optimize difficult information problems more easily, and help you to be more creative at work.
With all this in mind, here are a few important tips to keep yourself and your business on the right track with AI applications at work:
ChatGPT can help you cut through complex research tasks by analyzing the vast sea of data and generating useful synopses of otherwise complicated topics. By asking a series of probing questions about an area of interest, you can quickly give yourself a boost by letting the bot tell you the major issues to look for.
From there, though, you should leave the bot behind and do your own research, since AI is heavily shaped by the data it has taken in and by its algorithms. It won’t be aware of analysis published after its training data was collected.
Further, the bot doesn’t typically cite sources, so you’ll often have to do significant direct research just to figure out where to find the supporting information or analysis behind the bot’s output.
Don’t be too quick to publish AI-generated content, whether visual or text based, onto your website or in the market. Take the time to evaluate the originality of the content and whether rights might be infringed by its use. Don’t let an AI time-saver turn into a costly legal entanglement.
Before uploading any of your company’s information into an AI tool for analysis, understand how that data might be accessed or used. By ensuring you follow company policies with respect to data privacy and security, you can avoid leaking proprietary information unintentionally.
Even if you do generate a whole work of content with artificial intelligence, it’s important to give the output a final review with human eyes and expert analysis. Edit it, in other words, before pushing it out into the world. This will help you avoid embarrassing artificial intelligence errors.
AI is just the latest trend in an ever-changing career landscape. University of Phoenix (UOPX) prepares students and graduates for the career paths they choose (and any curveballs along the way) with the following:
Robert Strohmeyer is a serial entrepreneur and executive with more than 30 years of experience starting and running companies. He has served in leadership roles at three successful software startups over the past decade, and his writing on business and technology has appeared in such publications as Wired, PCWorld, Forbes, Executive Travel, Smart Business, Businessweek and many others. He lives in the San Francisco Bay Area.
Jessica Roper, University of Phoenix director of Career Services, is a seasoned leader with over 15 years of experience in leadership within higher education. She has honed her expertise in student services and career development and is passionate about helping others discover and refine their skills.
This article has been vetted by University of Phoenix's editorial advisory committee.