Artificial intelligence is seeing rapid growth across corporate spheres around the globe. While there are genuinely beneficial ways to use AI in the workplace, large language models (LLMs) like ChatGPT come with their own special set of pitfalls. As AI technologies advance, such as OpenAI’s recently debuted Sora (a text-to-video model boasting highly realistic imagery), ethical concerns over AI usage continue to rise, and for good reason. Artificially produced, sexually explicit images of pop star Taylor Swift caused a media frenzy earlier this year, sparking a conversation about AI legislation and regulation across industries. Moreover, academics are concerned that students will favor artificial intelligence for assignments instead of using their own words, noting that work generated by such a service and submitted as one’s own is plagiarism by definition.
We see AI causing a stir in a wide array of industries, but we know it’s not going away any time soon despite the plethora of issues at hand.
A 2024 article from Forbes details the technical and ethical pitfalls users may encounter with AI in a workplace setting. To date, LLMs like ChatGPT struggle with accuracy, bias, and cybersecurity, among other things. Additionally, a survey from Salesforce found that half of its 14,000 participants were already taking advantage of AI in the workplace without the approval of their supervisors.
If employees are already using it despite the known gaps in information and security, how can corporations adopt the practice in an ethical and efficient manner?
Responsible AI usage will be the cornerstone of effective marketing in the coming years, and if used incorrectly, it can be the make-or-break factor in any company’s success. Given that artificial intelligence has yet to completely master the flow of written communication and tends to fabricate information, it’s best not to use it to write an entire copy deck or news release. It can get the ball rolling on idea generation and brainstorming, but best practice is never to rely on artificial intelligence over human expertise. After all, AI missteps can quickly erode a brand reputation that may have taken years, or even decades, to build.
When it comes to other uses for AI in a work environment, it can serve as a powerhouse supporting actor, but it should not be cast in the leading role. In 2023, the Harvard Business Review published a list of ways to use AI responsibly in the workplace while still keeping employee interests at the forefront. From these, the TAG team has gathered its top recommendations for implementing AI at work.
Consent: Ensure both employees and employers are comfortable using AI in the workplace. If all parties consent, consider hiring an AI ethics advisor to manage the ethical and legal implications.
Be Transparent: Have open and honest conversations about AI and how it should be used. If you are publishing work created with the help of AI, it is best to disclose that information somewhere in the fine print. “Authentic” was Merriam-Webster’s Word of the Year for a reason.
Train and Trust: Teach anyone who will be using AI about safety and the responsible use of AI systems, then trust them to use those systems accordingly. Most important of all, however, is trusting the creatives to, well… create.
Whether you adore or despise artificial intelligence as a workplace tool, it isn’t leaving us any time soon. Embrace it with caution, and make sure your human employees know just how valuable they are.