By Gorett Reis

The Effect of AI on Certain Careers and What We Can Do About It

Updated: Sep 6, 2023


Although we’ve used forms of artificial intelligence (AI) before, such as the video and show recommendations on streaming services or the programs in our smartphones, the launch of OpenAI’s ChatGPT in November 2022 has stirred much talk and debate about the future of AI and its effects.


Because the potential and risks of AI are so broad, I’ll focus on only one segment: the effect it will have on certain careers and what we can do about it.


Before I discuss the different industries it will change, or will most likely change, I believe it’s a good idea to look at how ChatGPT (GPT stands for generative pre-trained transformer) functions as of the time of writing.


GPT-3.5 is the version ChatGPT launched with, and GPT-4 is the newer version available to paid subscribers. Both can learn and retrieve a lot of information, synthesize it, distill key points, and generate text or content. GPT-4 can analyze images as well as text; however, it cannot create original images on its own. It can also process and produce longer chunks of on-topic text than GPT-3.5, imitate the writing styles of specific authors, and problem-solve better than its predecessor, scoring around the 90th percentile on a simulated bar exam, for example.


Because of these capabilities and results, AI will undoubtedly replace some human labour, such as processing data, writing text, or even programming.


The good news is that we will still need human oversight. Humans can provide judgement, critical thinking, creativity, strategy (more on this later), and communication.


According to Erik Brynjolfsson, director of the Stanford Digital Economy Lab, if done right, AI won’t be replacing all knowledge and information workers. It’s about adapting. Professionals such as lawyers who work with AI will replace those who don’t.


David Epstein, in his book Range: Why Generalists Triumph in a Specialized World, describes the inability to rapidly adapt to new circumstances and situations as learned inflexibility, or cognitive entrenchment. He argues that the more a task shifts toward an open world of big-picture strategy, the more humans have to add. This is because, as of yet, AI is great at automating narrow, routine tasks (narrow AI); it doesn’t have the capacity to demonstrate intelligent behaviour across a range of cognitive tasks (general AI).


To Epstein, our strength is the opposite of narrow specialization. It’s the ability to integrate broadly. Our advantage is that we can figure out complex, ill-defined problems, e.g., games with unclear or incomplete rules, situations with no obvious or immediate feedback, and so on. So far, AI cannot do that. “There’s no mind behind it, just the illusion of one,” says Sophie Bushwick, a science and technology journalist and editor at Scientific American.


Epstein goes on to describe the characteristics of successful adapters (people with range): the ability to take knowledge from one domain and apply it to another to avoid cognitive entrenchment, the ability to draw on outside experiences and analogies for creative solutions, and the avoidance of, or lack of reliance on, the same old patterns.


Take this blog post, for instance. I could have relied on ChatGPT to create it; however, there were certain things I wanted to discuss (a certain direction), and I don’t believe it would have made the same connections I made here (e.g., research beyond text sources). Besides, many people who use ChatGPT regularly say you still need to edit for coherence and watch out for bias and misinformation, or hallucinations (confidently stating untrue information).


Speaking of bias, Joy Buolamwini, a computer scientist at the MIT Media Lab, found that certain groups are excluded from the data AI is trained on, which disadvantages them. She believes this bias stems from the lack of diversity in the data used to teach AI to make distinctions. For example, in tests of self-driving cars, pedestrian tracking was less accurate for darker-skinned individuals. Unfortunately, this bias is baked into popular résumé-screening programs and hiring algorithms: AI learns what a good hire looks like from past discriminatory hiring decisions, and it can be challenging to undo that training.


AI companies that are aware of this bias are looking for prompt engineers tasked with training large language models (LLMs) to continuously give users accurate and useful responses. Anthropic, an AI research and safety organization seeking candidates with broad skills, says: “We think A.I. systems like the ones we’re building have enormous social and ethical implications. This makes representation even more important, and we strive to include a range of diverse perspectives on our team.”


Some critics of the open letter signed about a month ago by tech leaders and AI engineers, which called for an immediate pause on AI development so that regulation could catch up, feel that you can simply tweak as you go along and that the letter caused unnecessary hype. However, if companies struggle to regulate themselves in other areas (e.g., environmentally), who’s to say they can regulate themselves when it comes to AI?


In this case, regulation seems like a good idea. The European Union has started ranking AI applications from high risk to low risk, with high risk covering sectors like employment and public services, or any AI that could endanger the life and health of citizens. High-risk AI in the EU is now subject to strict regulatory obligations. The U.K. government has also proposed an “adaptable” regulatory framework for AI.


If there are issues embedded in AI tech, such as discriminatory bias, the black-box problem (no one can understand or explain how a system arrived at a specific result), and deepfakes used to spread misinformation and abuse, then AI systems, or their engineers, should be able to explain how and why a system came up with its answers. Examining the exact information and parameters it was trained on in the first place is critical.


So, while the rise of AI may lead to the displacement of certain jobs, it’s important to recognize that humans still possess unique skills and abilities that cannot be replicated by AI. The key is to adapt and leverage these skills in a rapidly changing landscape. Hopefully, we can grow and develop alongside it. It’s also important for us to be mindful of the potential risks and biases embedded in AI and to work towards responsible regulation and implementation. If both can happen, then AI may prove to have more benefits than risks.


Best,









P.S. If you feel you need guidance with this growing transition, apply here to see if I can help.



