Jobs, roles, and responsibilities that AI almost certainly cannot eliminate.

These are the jobs AI can’t replace

The following contribution is from the World Economic Forum website and is written by Ian Shine, Senior Writer, Forum Stories.

 

 

AI is unlikely to be able to replace jobs that require human skills such as judgment, creativity, physical dexterity, and emotional intelligence.

As a result, the strongest job growth between 2023 and 2027 is expected to be in agricultural machinery operators, heavy truck and bus drivers, and vocational teachers, according to the World Economic Forum’s Future of Jobs Report 2023.

The skills most in demand by employers over the next five years will include analytical thinking, empathy and active listening, as well as leadership and social influence, according to the report.

I just asked ChatGPT this question: "Which jobs can't AI replace?"

Within seconds, it generated a 275-word answer. When I asked it to cut that down to fewer than 50 words, it was much slower. Much slower.

This perhaps confirms some of the points ChatGPT raised in its original response, when it indicated that AI won't be able to replace:

– Jobs that require human judgment and decision-making

– Jobs that require complex and nuanced communication

It also stated that AI won't displace jobs that require:

– Social and emotional intelligence

– Creativity and innovation

– Physical dexterity and mobility

Given that ChatGPT and other forms of generative AI generate their output by synthesizing what they find on the internet, it's no surprise that its response aligns with some of the findings of the World Economic Forum's Future of Jobs Report 2023, a comprehensive analysis of how jobs and skills will evolve over the next five years, written by people.


 

 

The Safest Jobs in the Face of AI

Surveys conducted for the Future of Jobs Report suggest that the greatest job growth between 2023 and 2027 will be for agricultural machinery operators, heavy truck and bus drivers, and vocational teachers.

Machinery mechanics and repairers are in fourth place.


This suggests that one of the greatest advantages of the human brain over AI is its connection to a real human body.

In fact, expectations that physical and manual labor could be replaced by machines have diminished, and companies surveyed for the report have revised their estimates for increased automation downward: they believe 42% of tasks could be automated by 2027, compared to the 2020 Future of Jobs Report’s predictions that 47% could be automated by 2025.

Jobs in Agriculture

According to the 2023 report, the number of jobs for agricultural professionals is projected to increase by 30% over the next five years. This represents an additional 3 million jobs.

This is only partly because workers in this sector are much less likely to be affected by generative AI and large language models (LLMs) like ChatGPT.

Other reasons include the shortening of supply chains, as more and more small farms sell directly to consumers rather than through intermediaries.

The growing use of agricultural technologies and increased investments in climate change adaptation are also driving an expansion of agricultural jobs.

Meanwhile, so-called "climate-smart agriculture," which addresses the interrelated challenges of food security and accelerating climate change, is not only increasing employment but also improving living standards, environmental outcomes, food security, and crop resilience, according to the World Bank.

Jobs in Education

According to the Future of Jobs Report 2023 surveys, employment growth of 10% in the education sector is projected by 2027.

This could mean an additional 3 million jobs in vocational training and higher education.

«This growth is particularly prevalent in non-G20 countries, where it is expected to be approximately 50% higher than in G20 countries,» the report notes.

The high adoption of education and workforce development technologies is considered an important driver of job creation.


 

 

Closing the Skills Gap

The other factor is organizations’ growing efforts to close the skills gap, as AI and other technologies redefine the capabilities needed by employees and employers.

The most in-demand skills, as ChatGPT mentioned, are analytical and creative thinking. Other skills at the top of employers’ wish lists include:

Empathy and active listening

Motivation and self-awareness

Leadership and social influence

Talent management

Service orientation and customer service

These are all highly human skills that fall outside the AI skill set.

Companies are expanding their training programs as AI alters skill requirements.

Jobs in Supply Chain and Logistics

As in the agricultural sector, the transition to localized supply chains is projected to be one of the largest gross job generators in the logistics industry.

However, the 2023 Future of Jobs Report notes that it could also lead to job losses, supply shortages, rising input costs, and a global economic slowdown, at least in the short term.

“The new economic geography brought about by the transformation of supply chains and a greater emphasis on resilience over efficiency is expected to generate net job growth, with benefits particularly for economies in Asia and the Middle East,” says Saadia Zahidi, managing director of the World Economic Forum.

According to the 2023 report's surveys, a net increase of 2 million jobs, equivalent to 12.5% of the workforce, is projected in the supply chain and logistics sector.

Trends in this field have been shaped by a shortage of heavy truck and bus drivers reported as of mid-2022.

However, relatively low expectations about the impact of autonomous vehicles on job creation also suggest that the driving profession is unlikely to disappear in the near future.


"The future of the white-collar worker is more threatened than that of the Uber driver, because we don't have self-driving cars yet, but AI can certainly write reports," Martin Ford, author of "Rule of the Robots: How Artificial Intelligence Will Transform Everything," told the BBC.

«In many cases, more highly educated workers will be more threatened than less-skilled ones. Think of the person who cleans hotel rooms: it’s really hard to automate that job.»

Overall, 50% of organizations expect AI to generate job growth, while only 25% believe it will lead to job losses, according to the Forum report.

When I asked ChatGPT for its opinion on whether more jobs will be created or lost due to AI, this is what it had to say:

“The impact of AI on job creation or loss is a complex and multifaceted issue, and the answer will depend on various factors, such as the specific sector, the type of job, and the level of implementation and adoption of AI technology.”

Perhaps it's time for ChatGPT to take one of those refresher courses in analytical thinking.

 

 

More than 120 jobs that AI won’t replace

The following contribution is from the Upwork portal, which describes itself as follows: For more than two decades, Upwork has pioneered a better way to work. We've helped businesses and professionals thrive through major changes, from the migration to the cloud to the rise of mobile technology and the new value created through social media. No matter how needs and skills evolve, our purpose remains the same: to create opportunity in every era of work.

Today, we find ourselves at a new frontier. AI is transforming the possibilities for both businesses and careers. Once again, Upwork is where businesses and talent meet. Our platform enables everyone, from Fortune 100 companies to ambitious startups, to access the human and AI expertise they need to act quickly, solve problems, and grow. Powered by Uma™, our mindful AI work companion, our AI-powered operating system accompanies you every step of the way to turn your aspirations into reality.

And the author is Emily Gertenbach, a B2B copywriter who creates SEO content for people, not just algorithms. As a former news correspondent, she loves to thoroughly research and analyze technical topics. She specializes in helping freelance marketers and marketing technology SaaS companies connect with their ideal clients through organic search.

 

 

 

There are many jobs that AI won’t replace; in fact, AI automation could even help professionals in these fields do their jobs better than before.

Artificial intelligence has many great applications in the workplace, from automating customer service workflows to quickly processing large scientific data sets.

But this doesn’t mean AI will completely displace human workers; in fact, off the bat, we can think of at least 120 jobs that AI won’t replace.

Healthcare Jobs That AI Can’t Replace


 

 

AI tools like ChatGPT can't replace the individual human interaction and interpersonal skills possessed by healthcare professionals.

From direct care to mental health support, people will remain an essential part of healthcare.

That said, healthcare is not immune to the impact of AI.

Generative AI in healthcare is useful for many reasons, including assisting care teams with repetitive tasks such as transcribing clinical notes or organizing data sets in Excel.

Advanced and Mid-Level Clinical Positions

Nurses and Nurse Practitioners

Psychiatrists and Therapists

Midwives and Gynecologists

Physical Therapists

Pharmacists

Veterinarians

Dentists

Wellness and Preventive Health Positions

Nutritionists and Dietitians

Health Coaches and Wellness Consultants

Mental Health Counselors

Occupational Therapists

Entry-Level and Support Positions

Medical Assistants

Home Health Aides and Caregivers

Medical Receptionists

Peer Support Specialists

Veterinary Technicians

Dental Hygienists


Creative Jobs That AI Can’t Replicate

Many creative endeavors involve creating something with one’s hands; such imaginative and inventive endeavors will be difficult to replace with any type of AI.

Even computer-based creative work, such as that performed by writers and graphic designers, is unlikely to be completely replaced by machines. While AI tools, such as an AI image generator, may appear to create unique creative or artistic content, they actually only recombine elements provided to them through training data.

The output of an AI tool is always predictive, based on algorithms and the experience it has been exposed to thus far.

This software simply cannot emulate human creativity and lacks what scientists call neuroplasticity (the brain’s ability to form new connections between neurons) and the ability to fine-tune the functioning and communication between different brain regions.

Established artistic positions

Theater actors and dancers

Jewelry designers and glassblowers

Muralists and live musicians

Entry-level and emerging creative positions

Illustrators and cartoonists

Creative assistants

Social media content creators

Makeup artists and face painters

Artists and DIY entrepreneurs


 

 

Specialized Jobs AI Can’t Do

An AI chatbot might be able to provide information on how to properly install a pipe or wire an electrical outlet, but it can’t perform the necessary tasks or be reliable for practical, real-time problem-solving.

There will always be a need for professionals with expertise in skilled trades, construction, engineering, and other disciplines related to home or infrastructure construction.

These professionals can turn to AI and automation to streamline tasks like data processing, invoicing, and customer account management, but human workers will still perform the practical work.

Established Skilled Jobs

Electricians

Plumbers

HVAC Technicians

Auto Mechanics

Entry-Level Jobs and Support Positions

Carpentry and Woodworking Apprentices

Construction Workers

Utility Markers and Site Guides

Painting Assistants and Preppers


Teaching and Academic Jobs Safe from AI

While AI is certainly a useful tool for research and study, or even a language practice partner, it can’t replace instruction from a knowledgeable human. Learning directly from someone with expertise in a specific field can give you insights you won’t get by interacting with AI, reading a book, or watching a movie.

Humans will also remain a vital part of academic research.

Since generative AI tools can only extract information from existing datasets, they can’t find new information (or conduct an archaeological dig, for that matter).

Professors and lecturers are likely to use AI in their work, whether for organizing lesson plans, transcribing audio, or other tasks.

However, we will still need staff to work in education and research.

Established Educational Roles

Classroom Teachers (K-12)

University Professors

Tutors and Learning Advisors

Research and Scholarship

Historians and Anthropologists

Archaeologists

Educational Researchers

Entry-Level and Support Roles

Teacher Assistants (TAs)

Library Technicians

After-School Program Facilitators

Museum Educators and Educational Assistants

 

What AI Can Do in Education vs. What Only Humans Can Do

AI can: generate quizzes, grade multiple-choice tests, translate text, and summarize data.

Only humans can: lead debates and group discussions, support students' emotional well-being, read the visual and physical cues that indicate distress or confusion, and teach courage, empathy, and creativity.


 

 

Service Jobs AI Won’t Replace

From planning and supervising events to cutting hair or cleaning a house, countless service and personal care jobs will continue to require a human touch.

Some of these professionals may use AI technologies to collect data or perform tasks such as planning schedules and creating marketing materials.

Ultimately, however, all of these jobs will continue to require a human connection and empathy.

Experienced service roles:

Massage therapists

Hairdressers and barbers

Manicurists

Tattoo artists

Estheticians

Tailors

Entry-level and support positions

Salon assistants

Spa receptionists

Dog grooming assistants


Leadership, legal, and business roles that AI won’t replace

Harnessing the power of generative AI is an indicator of potential business success.

Our "Innovators at Work" report found that 57% of the most innovative companies are ready to integrate AI into their operations, compared to only 42% of their less innovative peers.

But while AI software can drive business innovation, it can’t replace human leadership or emotional intelligence. MIT researchers found that AI models can make harsher judgments than humans.

This indicates that people capable of perceiving nuances that training data sets can’t provide are crucial in business, human resources, law, and other fields where balanced decision-making is required.

Experienced Positions

Judges and Legal Mediators

Business Strategists and CEOs

HR Managers and DEI Leaders

Ethics Officers and Policy Officers

Entry-Level and Support Positions

Paralegals and Legal Assistants

Executive Assistants and Chiefs of Staff

HR Coordinators and People Operations Associates

Compliance Analysts


Sports and Adventure Jobs AI Can’t Replicate

 

In 2020, COVID-19 put live sports on hold, so the National Hockey League (NHL) broadcast televised esports games in which professional hockey players tried their hand at an NHL video game. Fans watched the games but, unsurprisingly, were thrilled when live sports returned later that year… and so were the athletes.

Software-generated sports games can’t compete with watching trained professionals play in person.

AI will not replace professional athletes, coaches, adventure tour guides, or anyone else involved in specialized (and potentially dangerous) physical activities.

Professional and mid-career roles

Professional athletes

Team coaches

Adventure sports instructors

Scuba and sailing guides

Personal trainers

Entry-level and support roles

Sports coaches and fitness assistants

Recreational leaders

Lifeguards

Assistant coaches and sports camp instructors

Why can’t AI compete in sports?

Skill: Needed in sports? Can AI achieve it?

Instant decision-making ✅ ⛔

Ability to interpret crowd reactions ✅ ⛔

Muscle memory ✅ ⛔

Empathy ✅ ⛔

Ability for face-to-face coaching ✅ ⛔


 

 

Green jobs that AI can’t handle

Farmers, conservationists, and environmental advocates often perform hands-on work, from visiting forests and rivers to tilling the land and planting crops. This type of work can’t be completely replaced by machine learning and automation; someone must collect samples, interact with local people, and explore new discoveries.

That said, AI-related technological advances can help conservationists monitor migration patterns, detect poachers, and limit smuggling or illegal logging. AI technology can also help farmers monitor the health of crops and livestock.

Mid-career and specialized positions

Farmers and crop managers

Arborists and tree surgeons

Conservation scientists

Environmental engineers

Entry-level and field positions

Forest rangers and trail managers

Field research assistants

Urban agricultural technicians

Sustainability outreach coordinators

Public service and emergency response jobs that AI cannot perform

Jobs that require close interaction with the community, including lifesaving, will also continue to require human presence.

From first responders to NGO workers and postal workers, a wide range of jobs involve skills such as manual labor, decision-making, and empathy—all characteristics that machines cannot replicate.

 

However, don’t be surprised to see these professions using AI to improve their work.

As in academia, conservation, and other fields, public service workers can turn to AI to streamline parts of their jobs, such as performing diagnostics, finding the best driving route, or translating languages on the fly.

Experienced and High-Risk Positions

Paramedics and Emergency Medical Technicians (EMTs)

Firefighters

Police Crisis Negotiators and First Responders

Social Workers

911 Dispatchers

Entry-Level or Community-Facing Positions

Community Organizers

Public Health Workers

Shelter Coordinators and Housing Advocates

Cross-Cultural Liaisons and Translators in NGOs

Why Can’t AI Replace First Responders?

Task: Human needed? Ready for AI?

Comforting a child in crisis ✅ ⛔

Making triage decisions ✅ ⛔

De-escalating a domestic dispute ✅ ⛔

Recording the location of fire hydrants ⛔ ✅

Infrastructure work that AI can’t replace

While we may see a large portion of utility operations enhanced by AI in the future, system monitoring and hands-on repairs will still require a human touch.

For example, a natural gas company could use these new technologies to monitor pipelines and detect leaks.

If AI detects a potential risk, human experts can examine, repair, and secure the portion of the infrastructure detected by the technology.

Technical and specialized roles:

Power line installers

Gas pipeline inspectors

Hydroelectric plant operators

Nuclear facility technicians

Entry-level and support roles

Utility field technicians

Maintenance trainees

Control room attendants

Meter readers and field asset testers

Spirituality and ethics roles that AI cannot fill

Again, the lack of empathy present in large language models means that AI is unlikely to replace jobs related to religion, ethics, and philosophy.

Unlike humans, machines cannot feel empathy or compassion for another being or offer emotional support or mentoring. AI also cannot consider morality during complex decision-making that requires human judgment.

Experienced Positions

Priests and Pastors

Ethicists and Philosophers

Entry-Level and Apprenticeship Positions

Youth Ministry Assistants and Religious Education Assistants

Chaplains-in-Training and Seminarians

Ethics Research Assistants

Interfaith Program Coordinators


 

 

Communication Careers Safe from AI

Traditional and specialized media platforms are filled with discussions about how to use AI in research, journalism, and even video production; however, AI will not completely replace all media and communications-related jobs.

While an investigative journalist can use AI for data analysis or a seminar leader can turn to it to transform speeches into new material, most of these functions will continue to be performed by humans.

Experienced and High-Trust Positions

Investigative Journalists

Speakers and Moderators

TV Presenters and Talk Show Hosts

Editorial Columnists

Entry-Level and Emerging Positions

Journalism Assistants and Fact-Checkers

Junior Social Media Managers

Podcast Production Assistants

Event Hosts and Campus Speakers

How Humans Can Work with AI in Communications Positions

AI can: write a script, generate content outlines, repurpose headlines, and deliver basic storytelling.

Humans should: read the room, interview sources, uncover new truths, and modulate tone and emotion based on live audience reactions.

Culinary and beverage jobs that AI can't taste

The food and beverage industry has many jobs that AI won’t be able to replace.

Of course, chefs, bakers, and brewers can use AI to organize recipes, develop new ingredient combinations, and plan menus.

But many of the core tasks these professionals perform can’t be handled solely by automation, machine learning, or even robots.

In the future, a skilled human professional will still need to be the one to sauté, glaze, taste, distill, and perform other tasks that require a combination of skill, adaptability, and good taste.

Expert and artisan positions

Chefs and executive cooks

Sommeliers

Artisan bakers and chocolatiers

Artisan brewers and distillers

Entry-level and support positions

Line cooks and prep cooks

Baristas

Catering assistants

Food stylists and plating assistants

These are not the only jobs that are unlikely to be replaced by machines in the future.

Human contact will continue to be a fundamental requirement in many other industries and positions.

From the skills required by diplomats managing international relations to the tasks performed by home inspectors to ensure homeowners make wise purchases, many priority positions for people will remain in the labor market.


 

 

Role: Why can’t AI do this?

Diplomat: Diplomacy requires understanding verbal, physical, and cultural nuances in high-risk and crisis environments, something a pre-trained algorithm cannot do.

Home inspector: An inspector assesses structural cues, both visually and with instruments, in ways AI alone might not be able to interpret adequately.

Politician: Political roles involve managing relationships, emotions, public perception, cross-cultural communication, and more, all while planning next steps.

Humanitarian worker: Humanitarian workers can provide practical medical assistance, make lifesaving decisions on short notice, and navigate crisis environments in ways that AI cannot.

Restoration specialist: A restoration expert uses fine motor skills, dexterity, visual cues, and physical instruments to make precise adjustments to fragile documents; AI and robotic systems cannot replicate this.

Improv Artist: Improv comedians and actors often build emotional connections with their audiences and even involve viewers in their performance, all while changing their act based on external cues.

You can also follow related career paths to advance to an experienced position that is safe from AI-related job loss. These include:

Starting as a diplomatic aide and becoming a senior member of the Foreign Service.

Working as a real estate intern and eventually becoming a licensed home inspector.

Working as a legislative aide before running for political office.

Using practical experience as an artist to inform a career as an art therapy assistant or licensed art therapist.

Volunteering as a crisis counselor before pursuing social work.

 

Tech roles enhanced, not replaced, by AI

There are also many roles that are impacted by AI, but not completely replaced. Many tech professionals are increasingly using AI-based tools in their work to increase efficiency, test new ideas, and more.

These tech positions can benefit from AI, but still require human-centric skills:

Software developers

Data analysts and scientists

Machine learning engineers

Cybersecurity experts

Strengths of AI vs. Human Advantages: Why Some Jobs Are AI-Proof

AI strengths: pattern recognition, data processing speed, task automation, language generation, and information recall.

Human advantages: emotional intelligence, moral judgment, sensory adaptation, creative improvisation, and critical thinking.

Human Work in an AI World

All of the jobs on this list require skills that only a human can offer, such as empathy, moral decision-making, interpreting visual cues, responding to improvised situations, developing new philosophical ideas, and developing creativity.

But that doesn’t mean there isn’t room for creatives, philosophers, actors, electricians, and others to benefit from AI systems that assist with data entry, analysis, brainstorming, scriptwriting, and other tasks.

It all comes down to finding the balance between AI efficiency and human skills. Find forward-thinking freelance careers that allow you to combine your unique human qualities with innovative and exciting technology: create your profile on Upwork today.

 

 

11 Jobs AI Could Replace by 2025 and More Than 15 Secure Jobs

The following contribution is from the Forbes website and is written by Rachel Wells, Contributor. Rachel Wells is a writer covering leadership, AI, and skills development.

 

 


AI is creating 97 million new jobs.

AI is transforming jobs at a faster rate than any other known workforce revolution in recent history.

At the beginning of the 20th century, industrial automation replaced the roles of thousands of artisans and small factory workers. At the end of the 20th century, ATMs began to revolutionize the banking sector and temporarily affected the jobs of tellers.

In the early 2000s, the wave of e-commerce and the internet impacted large sectors of retail workers and companies like Blockbuster (who remembers that?).

We’re barely past the first quarter of 2025, and Meta has already announced it will cut approximately 5% of its global workforce, or 3,600 employees, with underperformers being the first to go. (And AI didn’t really come to the forefront until 2022; let’s reflect on that.)

However, as Jason Snyder points out for Forbes, "It's not about performance, it's about priorities. While Meta presented the layoffs as a way to eliminate underperforming employees, many affected workers have pushed back, arguing that the company prioritizes AI-driven efficiency over human labor."

Snyder continues: "Mark Zuckerberg has openly stated that Meta wants to raise the talent bar and accelerate hiring in AI and machine learning roles immediately after the cuts.

The layoffs began on Monday. Hiring for AI-focused roles began on Tuesday."

Also remember that Meta is not the only company taking this stance. Several other large companies have followed suit, laying off thousands of workers in an attempt to become more efficient and prioritize AI.


 

 

Clearly, the workforce is being restructured in another AI industrial revolution.

Data from the World Economic Forum (WEF) shows a positive side effect.

While it’s true that jobs are being eliminated, a 2020 WEF report, prior to ChatGPT, suggests that despite 85 million positions being eliminated, 97 million new jobs are expected to emerge, specifically in fields such as data science, AI development and monitoring, and AI and human collaboration roles.

 

We have seen evidence of the positive results of an industrial revolution throughout history.

With the examples of past industrial revolutions, it’s true that jobs have been completely eradicated, but they have been replaced by positions more suited to the times and the wave of new technologies and innovation.

Therefore, it shouldn't be difficult to imagine that the AI and robotics revolution will have the same results: eliminating jobs and creating an entirely new labor market.

11 Jobs Most at Risk of Being Replaced by AI in 2025

Employers have already identified the implementation of artificial intelligence as one of their key business priorities for 2025 and beyond. Nine in ten say they expect to use AI and generative AI-based solutions over the next five years, and 73% admit to prioritizing the hiring of AI talent. The question is, as more companies embrace this new era, which jobs are most at risk of disappearing?

This is a crucial question on the minds of many American workers, as 52% are concerned about the impact of AI on their jobs, according to a new Pew Research study of more than 5,000 American professionals.

The job board Indeed has just released a new list of jobs at risk of automation and defines automated roles as "tasks that machines or software programs can perform without human intervention."

They are typically routine or repetitive actions that require a high degree of precision. They can include simple tasks, such as making phone calls, or complex processes, such as analyzing data or processing transactions. In industrial settings, automated tasks are often those that humans perceive as undesirable. Their list includes:

Manufacturing jobs (machine operation, product handling, testing, packaging, etc.)

Retail roles (customer service, inventory management, fraud analysis)

Transportation and logistics jobs (human drivers are being replaced by autonomous vehicles, as we’re already seeing with Waymo)

Entry-level data entry, analysis, and visualization jobs

Financial analysis and forecasting roles

Travel agents and itinerary providers

Translators

Tax preparation and entry-level accounting positions

Other positions at risk of becoming obsolete, or facing lower demand, not explicitly mentioned on Indeed’s list, include:

Proofreaders

Legal assistants

Graphic designers


Industrial revolutions always bring unwelcome changes, but if you hide, pretend they don’t exist, or fight against them, your career will stall and your prospects in the job market will shrink.

 

More than 15 jobs safe from AI in 2025

So, which roles are safe from the threat of automation so you can future-proof your career and plan accordingly?

AI Jobs (AI Design)

Well, obviously, the first group of jobs you’d expect to be safe from automation are AI jobs, or roles that enable AI to function as it should.

This includes:

Machine learning engineers

Software developers

Data scientists

Cybersecurity engineers

AI agent managers

AI-powered jobs (AI Collaboration)

The next group of roles safe from AI are those that work in collaboration with it, rather than ignoring it, pretending it doesn’t exist, or treating it as a threat.

These are positions that require high levels of specialization, a personal touch, or specifically demand in-person interaction with a real person. They also tend to be more creative and require human decision-making or insight.

You’ll be reassured to know that there are many positions that fall into this category, spanning education, healthcare, and business. For example:

Registered Nurses

Choreographers

Paramedics

Mental Health Specialists and Counselors

Teachers (from primary and secondary school through higher education), lecturers, instructors, and professors

Civil Engineers

Surgeons

Project Managers

Directors and Operations Managers

Musicians

Journalists

To be clear, no one is saying that the aforementioned positions won’t be impacted by AI; rather, anyone in these roles will need to adapt their work, update their skills and knowledge, and find ways to incorporate AI-based tools and intelligence to help them focus on the more complex aspects of the job.

What’s even more exciting is that, according to the World Economic Forum, 85% of the jobs that will exist in 2030 haven’t even been invented yet.

New jobs will be created (and are already in the process of being created), so you’ll have ample room to apply your skills and experience in collaboration with the new workforce.

While we may see a large portion of utility operations enhanced by AI in the future, system monitoring and hands-on repairs will still require a human touch. For example, a natural gas company could use these new technologies to monitor pipelines and detect leaks.

 

 

Should I be worried?

There’s no point in ignoring reality or treating AI as an enemy of your career.

In an interview at IBM’s London headquarters for this Forbes article, Justina Nixon-Saintil, a vice president at IBM, emphasized: «Learning doesn’t stop. There are constantly new technologies; they’re accelerating at a much faster pace than ever before. Today it’s AI, tomorrow it could be quantum. AI will impact almost every job and affect every industry.»

She concluded: «Whether you work in the services sector, at a tech company, or in finance, everyone needs to upskill and understand what AI means for their role.»

So no, you shouldn’t worry.

Instead, prepare a detailed career action plan to stay ahead of the curve and ensure that every upskilling course you take, every new job offer you accept, or every position you seek is strategically aligned with your career goals and the needs of the ever-evolving workforce.

Take the time to familiarize yourself with AI, especially considering that employers struggle to find talent with AI skills and prefer to hire a worker with AI skills over one without.

If you blindly act as if nothing will happen and wait for your job to be uprooted, then yes, you should be concerned.

The positions with the lowest risk of automation are those driven by humans and require deep problem-solving and interaction.

Frequently Asked Questions About Job Loss with AI

Which jobs are at risk due to AI?

If your position is highly repetitive or involves tasks that are often boring and monotonous, it is probably at risk.

The more human effort and intuition required, the lower your risk.

Which jobs will AI never replace?

Leadership roles, childcare, education, healthcare, technical design such as landscaping and architecture, and positions that require problem-solving and human interaction, as well as positions that enable AI to operate.

What should I do if my job is at risk?

Don’t panic. Create an action plan and focus on three quick actions or steps you can take now to easily adapt.

Learn automation tools and practice them so you’ll be in demand when employers need your talent, and improve your skills in promising areas. You should also pursue AI-related certifications.


The Dangerous Impact of AI on Decision-Making

The following contribution is from Forbes and is written by Kamales Lardi, a member of the Forbes Business Council. She is also CEO of Lardi & Partner Consulting, a digital transformation consultancy that leverages neuroscience and human-tech psychology.


Artificial intelligence (AI) is at its peak: everyone wants it or claims to have something to do with it. And, frankly, AI’s transformative potential can be seen across many industries, transforming the way businesses operate and compete.

As organizations become convinced of the incredible potential of AI applications in business, management teams increasingly view it as a silver bullet for business challenges and growth opportunities.

As AI-based technologies become more sophisticated and ubiquitous across the global business landscape, the tendency to blindly trust AI as inherently «intelligent» and objective is rapidly increasing.

Overreliance on AI can breed complacent behavior and reduce critical thinking, which is essential for decision-making.

Understanding Human Decision-Making with the Accumulation-to-Threshold Model

Before we understand how AI influences decision-making, we must understand the human process.

From a neuroscientific perspective, the accumulation-to-threshold model offers a simple and intuitive way to explain how the human brain makes decisions.

Imagine your brain has buckets that fill up as it gathers information. Once a bucket is full (the threshold is reached) and enough information has been collected, a decision is made.

Let’s look at a simple example of how this works. If you decide to cross the street, your brain gathers information from the surrounding world to fill the «wait» or «walk» buckets. When you notice the traffic light is red or there are no cars, your «walk» bucket fills up. On the other hand, if you see cars approaching or the light is green, your «wait» bucket fills up. When there is enough evidence in the «walk» bucket, your brain sends a signal that it is safe to cross the street.
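The bucket analogy can be sketched as a tiny simulation. This is an illustrative toy, not a neuroscientific model: the cue probability, threshold, and seed are all invented for the example.

```python
import random

def accumulate_to_threshold(p_walk_cue=0.6, threshold=5, max_steps=1000, seed=42):
    """Toy accumulation-to-threshold model: each observed cue adds one unit
    of evidence to a 'walk' or 'wait' bucket, and the first bucket to reach
    the threshold triggers the decision."""
    rng = random.Random(seed)
    buckets = {"walk": 0, "wait": 0}
    for step in range(1, max_steps + 1):
        cue = "walk" if rng.random() < p_walk_cue else "wait"
        buckets[cue] += 1
        if buckets[cue] >= threshold:
            return cue, step  # the decision, and how many cues it took
    return None, max_steps  # evidence never accumulated enough

decision, steps = accumulate_to_threshold()

# A 'foggy night': cues are barely informative, so more evidence is needed
# before either bucket fills, and the decision takes longer.
foggy_decision, foggy_steps = accumulate_to_threshold(p_walk_cue=0.52, threshold=20)
```

Lowering the informativeness of cues (or raising the threshold) slows the simulated decision, mirroring the fog example that follows.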

In this process, access to accurate information, such as visual cues and credible data, has a major influence on decisions. As in the previous example, on a foggy night, we receive fewer visual cues.

It may not be clear whether the traffic light is red or green. The bucket is therefore harder to fill, and decision-making slows down or the likelihood of errors increases.

Furthermore, our values, priorities, beliefs, and personal biases are reflected in the decision-making process.

They act as levers that control which bucket information is directed to, as well as the speed with which it flows. For example, one person may categorize the yellow traffic light as «wait,» while another may categorize it as «walk faster.»

Influence of AI on Decision-Making

How do AI systems impact the accumulation-to-threshold model and our ability to make objective decisions?

AI systems can influence the type and quality of information available.

Algorithms can enhance echo chambers by highlighting data or information that aligns with specific preferences.

This limited exposure to diverse viewpoints or information results in biased decisions, which restricts the human brain’s natural ability to adapt and challenge biases during the decision-making process.

A common example of this is the engines on social media platforms that shape the information we see in our feeds and influence the evidence we gather.
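A minimal sketch of how such an engine narrows the evidence a user gathers. The `rank_feed` function, the topic tags, and the interest sets are all hypothetical, invented purely for illustration:

```python
def rank_feed(items, user_interests, k=3):
    """Hypothetical engagement-style ranker: score each item by its overlap
    with the user's past interests and surface only the top k, so items with
    unfamiliar or opposing viewpoints rarely reach the feed."""
    scored = sorted(
        items,
        key=lambda item: len(set(item["topics"]) & user_interests),
        reverse=True,
    )
    return scored[:k]

items = [
    {"id": 1, "topics": ["ai-optimism"]},
    {"id": 2, "topics": ["ai-optimism", "jobs"]},
    {"id": 3, "topics": ["ai-risk"]},
    {"id": 4, "topics": ["ai-risk", "regulation"]},
    {"id": 5, "topics": ["jobs"]},
]

# A user who engaged with optimistic AI/jobs stories sees more of the same;
# both 'ai-risk' items are ranked last and never make the feed.
feed = rank_feed(items, user_interests={"ai-optimism", "jobs"})
feed_ids = [item["id"] for item in feed]
```

In bucket terms: the ranker controls which evidence ever reaches the buckets at all, before any weighing of options begins.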

 

 

Another danger of AI is its ability to produce hyper-realistic content, such as deepfake videos and simulated voices.

While these technological advances are compelling evidence of AI’s progress, they can also damage the integrity of information. These convincing AI-generated results can influence our emotions, prompting us to make impulsive decisions.

Not only can this lead us to rely on misinformation, but it can also erode trust in genuine content.

It is clear that we are reaching a crucial point in technological development, where AI can blur the line between what is real and what is fake.

With this, our ability to make decisions based on visual content may be compromised, with significant implications for our world.

The food and beverage industry has many jobs that AI won’t be able to replace. Of course, chefs, bakers, and brewers can use AI to organize recipes, develop new ingredient combinations, and plan menus. But many of the core tasks these professionals perform can’t be handled solely by automation, machine learning, or even robots.

 

 

Ensuring Responsible Decision-Making

As users of AI systems, both individuals and companies must be educated about their limitations, especially those related to bias and reliability.

While AI is a powerful tool that can assist in decision-making, people should be encouraged to critically evaluate its recommendations and avoid blind trust.

This involves proactively fostering a culture of skepticism, ensuring that we effectively question and test the credibility of the results generated by AI systems.

As AI is increasingly adopted for business purposes, organizations must focus on fostering psychological safety and empowering employees to raise concerns when AI insights differ from expectations.

The key to successful AI adoption in business lies in ensuring we can maintain a balance in harnessing its potential while safeguarding the human ability to maintain objectivity and common sense in decision-making.

Forbes Business Council is the leading growth and networking organization for entrepreneurs and leaders. Do I qualify?


Introduction to AI Governance

The following contribution corresponds to the Modulos portal, which is defined as follows:

Mission

We help organizations develop and operate AI products and services in a new regulated environment.

Vision

We offer an AI Governance Platform that seamlessly integrates AI governance, risk management, and data science, ensuring your organization complies with regulations while innovating.

Authored by the team.

 

 

AI governance is a fundamental aspect of the responsible development of AI.

Its objective is to create a framework for its responsible and ethical use, protecting the rights and freedoms of individuals. But what exactly is AI governance and why is it important?

Let’s review the basics.

What is AI Governance and why is it important?

AI governance is a set of principles, regulations, and frameworks that guide the development, deployment, and maintenance of AI technologies.

It considers various aspects such as ethics, bias and fairness, transparency, accountability, data governance, and risk management.

Its main objective is to ensure the ethical and responsible use of AI.

Its importance lies in the ability to mitigate the risks associated with AI applications, including bias, privacy violations, and unexplained results.

Proper AI governance builds trust among users and stakeholders. It ensures that AI technologies are used for beneficial purposes and comply with legal and societal expectations.

Fundamental Principles of AI Governance

At the core of AI governance are some fundamental principles that guide its development and implementation. These include:

 

Ethical Principles

AI governance should follow ethical principles, ensuring that AI applications respect human rights and fundamental freedoms.

Transparency

AI should be transparent to users and stakeholders, promoting trust in the development, implementation, and use of the technology.

Accountability

Those who develop or implement AI technologies must be held accountable for any harm they cause.

Equity

AI governance must promote equity, preventing discrimination and bias in the development and use of AI applications.

Risk Management

We need proper risk assessment and management to identify and mitigate potential risks associated with AI technologies.

Auditability

AI systems must be auditable. The processes and decisions they make must be easily traceable and explainable.

Human Oversight

Humans must have some level of control and decision-making over AI systems to ensure ethical use.
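Two of these principles, auditability and human oversight, can be made concrete in a few lines. This is a hypothetical sketch: the `audited_decision` wrapper, the toy scoring model, and the review band are invented for illustration, not taken from any real governance platform.

```python
import json
import time

def audited_decision(score_model, features, audit_log, review_band=0.2):
    """Wrapper illustrating auditability (every automated decision is written
    to a traceable log) and human oversight (scores close to the decision
    boundary are escalated to a person instead of being auto-applied)."""
    score = score_model(features)
    decision = "approve" if score >= 0.5 else "reject"
    needs_human_review = abs(score - 0.5) < review_band  # near the boundary
    audit_log.append(json.dumps({
        "timestamp": time.time(),
        "features": features,
        "score": round(score, 3),
        "decision": decision,
        "escalated_to_human": needs_human_review,
    }))
    return decision, needs_human_review

log = []
toy_model = lambda features: 0.62  # stand-in for a real scoring model
decision, review = audited_decision(toy_model, {"income": 40000}, log)
```

Because every record is a plain, timestamped entry, the decision trail stays traceable and explainable after the fact, which is the point of the auditability principle.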

These principles are the foundation of responsible AI governance. It is essential to consider them in any framework or regulation related to AI. To understand why companies and governments invest in AI governance, let’s take a closer look at its historical development.

 

 Historical Context and Development of AI Governance

The concept of AI governance is not new. It has emerged and evolved alongside the advancement and diffusion of AI technologies.

In its early days, AI governance was a relatively ignored area, given its experimental nature.

However, as the potential implications and impacts of AI became clearer, the need for structured governance became crucial.

In recent years, high-profile incidents involving AI have highlighted the need for governance.

For example, in one troubling episode, the Netherlands suffered a significant scandal stemming from the misuse of AI.

Thousands of people suffered serious consequences when the Dutch tax authority used an algorithm to identify suspected social benefit fraud.

This scandal became known as the «toeslagenaffaire» or childcare benefit scandal.

The Dutch tax authorities used a self-learning algorithm to create risk profiles and detect fraud.

However, the system had flaws. Based on the system’s risk indicators, families suspected of fraud were sanctioned.

It led to the impoverishment of tens of thousands of families, and some victims even resorted to suicide.

This debacle highlights the potential devastation that automated systems without the necessary safeguards can cause.

From the skills required by diplomats managing international relations to the tasks performed by home inspectors to ensure homeowners make wise purchases, many priority positions for people will remain in the labor market.

 

 

Amazon faced similar challenges with its AI recruiting tool, which was found to be biased against women.

The tool, developed in 2014, used machine learning to review resumes and rate job applicants.

Amazon designed it to streamline the talent acquisition process, assigning scores to candidates in the same way Amazon shoppers rate products.

However, by 2015, the company discovered that the system was not rating female candidates for technical positions in a gender-neutral manner. This was because the training data was biased, as most resumes came from men, reflecting the male dominance in the tech industry.

The algorithm therefore found male candidates preferable, even penalizing resumes that included the word «women’s.»

 

This ultimately led Amazon to dismantle the project.

Another example is a recent settlement with the Equal Employment Opportunity Commission (EEOC) related to alleged AI bias in hiring.

The EEOC v. iTutorGroup case addressed the allegation that iTutorGroup’s AI recruiting tool exhibited age and gender bias.

This resulted in the rejection of male applicants over 60 and female applicants over 55.

 

The defendants denied these allegations. However, the settlement highlights that AI tools that cause unintended discriminatory outcomes can lead to serious legal consequences.

Incidents such as these have generated a growing demand for frameworks and regulations to manage the development and application of AI.

Over the years, various stakeholders, including policymakers, industry leaders, and academic researchers, have proposed different frameworks and models for AI governance, each contributing to its development.

This has led to the recent introduction of the EU AI Regulation and other relevant laws.

But before we delve into these regulations, let’s first understand why companies should invest in AI governance.

Why do companies need AI governance?

Without proper governance techniques, organizations run a significant risk of legal, financial, and reputational damage due to the misuse and biased outcomes of their algorithmic inventory.

Therefore, AI governance is not just a mandatory requirement but a strategic necessity to mitigate these threats and, on a larger scale, promote trust in AI technologies.

Companies that use AI in their products have an obligation to implement responsible governance structures and a strategic incentive to do so.

Having comprehensive oversight and understanding of their AI inventory will mitigate the threats posed by inadequate governance and make it easier to monitor and update operational practices in accordance with evolving risks and regulations.

Furthermore, with the introduction of the EU AI Act and similar regulations, companies that proactively implement responsible AI governance practices will have a competitive advantage over those that do not. Demonstrating accountability and transparency in the use of AI technologies is increasingly important.


Transforming Business Decision-Making with Human-Centered AI

The following contribution is from the TATA Consultancy Services portal, which defines itself as follows: We deliver excellence and create value for our clients and communities. Our dedicated and experienced team puts our shared values into practice every day. With the best talent and the most advanced technology, we help our clients turn complexity into opportunities and drive meaningful change.

The authors are:

 

Siva Ganesan

Senior Vice President and Global Head, AI & Data Business Unit, TCS

Ashok Krish

Vice President and Head, AI Practice, TCS

Sankaranarayanan “Shanky” Viswanathan

Vice President, Corporate Technology Office, TCS

 

 

Summary

There is a collective sense of momentum around the rapidly evolving AI landscape.

However, amid this progress, a deep misunderstanding about its true transformative potential has taken root.

While organizations often view AI as a tool for process automation and optimization, the real revolution lies in reinventing the way people work and make decisions.

Traditional workflows and business processes are designed for humans and transactional systems of record.

A successful AI transformation focuses on people, not processes or systems.

For humans and machines to collaborate optimally, value chains must be reinvented from the ground up.

Intelligent choice architectures (ICAs) are the natural next step in ways of working.

Intelligent choice architectures

ICAs represent a profound shift where AI systems adapt to humans, not the other way around.

ICAs are built on a foundation of data (both system information and human contextual tacit knowledge), models that encapsulate the knowledge, and agents that perform the actual work.

Fundamentally human-centered, ICAs leverage the power of predictive and generative AI and apply it to the inherently subjective, nuanced, and contextual nature of knowledge work.

In this way, AI revolutionizes decision-making, empowering people with smarter, faster, and more informed choices.

From Best Actions to Human-Centered Decision-Making

Part of the misunderstanding surrounding decision-making stems from previous technological iterations, which were more prescriptive than empowering.

Traditional decision tools, while effective in some areas, often reinforce the perception of AI as merely a productivity booster.

Codified with rigid constraints, the best actions offered to humans are derived from predefined and limited options.

But that’s not how business works (or not for long). In any business environment, decision-makers operate under highly subjective and nuanced parameters.

A decision constrained to a limited set of options will invariably omit additional critical perspectives: customer experience, business risk, revenue, or any of the dozens of parameters that make up a given scenario.

 

 Let’s take a manufacturing warranty claim as an example.

Traditional automation expects the same finite set of decision variables to determine whether a claim is approved or not.

In reality, each claim has dozens of individual nuances and interpretations, making it impossible to fit into a neat set of a dozen categories.

Worse still, next-best actions don’t empower humans, stifling their own understanding of context and decision-making expertise.

Human experts apply a significant amount of subjective nuance to make decisions in different areas.

Information, experiences, and situations influence how they weigh options and adapt decision-making.

In the manufacturing warranty claims example, the importance of contextual awareness makes it impossible to reduce decisions solely to standard operating procedures and rules.

The best answer for Customer A could be completely different from that for Customer B, even with identical surface data.

Instead, ICAs dynamically generate new alternatives based on evolving data patterns and contextual insights.

These systems don’t just provide answers: they empower employees to make their own decisions more quickly, from a wider range of curated options, with alternatives and trade-offs.

What truly distinguishes ICAs is their ability to engage innovatively with decision-makers and learn to make better recommendations and choices.

ICAs continually learn, evolve, and align with the organization, providing feedback loops between options and outcomes.

Every decision, and the choices that support it, become learning opportunities that drive a self-sustaining cycle of success.

Progressive Pathways to Reinvented Value Chains

In an environment of rapid change and disruption, AI’s ability to assist, enhance, and transform decision-making has never been more valuable.

The greatest value is realized when AI adapts to the person, enhancing decision-making with context, empathy, and relevance.

Rather than imposing a single interaction model on all users, ICAs align with individual personalities, preferences, and work styles while maintaining consistency in the quality of underlying decisions.

Through progressive stages of maturity, ICAs create pathways to new ways of working and making decisions:

Assist: At this foundational level, AI provides assistance with specific tasks, with humans delegating tasks to AI capabilities for discrete functions. The relationship is transactional, with humans directing AI to perform well-defined tasks.

Augment: Here, AI evolves into a collaborative partner that works alongside humans, offering options and recommendations while facilitating human collaboration. The relationship becomes more balanced, with AI functioning as a trusted advisor with no vested interest beyond improving human performance.

Transform: At the highest level of maturity, AI orchestrates work across the entire value chain, optimizing not only individual performance but also enterprise-wide outcomes. This is where many misunderstandings arise. It’s not about replacing humans, but about reimagining how workflows and decisions are made to maximize collective outcomes.

People are still essential, but their way of working is orchestrated by systems to achieve optimal results. At the local level, each worker needs personalization.

However, the best option for an individual worker might not be the best option for the organization. At the global level, organizations need to ensure that individual optimizations don’t lead to suboptimization at the enterprise level.

True transformation occurs when AI orchestrates work in a way that maximizes enterprise-wide outcomes, not just individual productivity.

 

This framework represents an organizational progression.

Each stage builds on the previous one, creating a true human-centered intelligence system that orchestrates enterprise-level decisions.

As collective enterprise decision-making capabilities progressively improve, institutional knowledge expands rapidly: employees make smarter, faster decisions through personalized augmentation.

Restoration Specialist: A restoration expert uses fine motor skills, dexterity, visual cues, and physical instruments to make precise adjustments to fragile documents; AI and robotic systems cannot replicate this.

 

 

Building an Intelligent Decision Architecture

Designing the right decision environment is as crucial as the AI itself.

Designing the contexts in which key strategic and operational decisions are made requires an optimal framework that enables both humans and machines to navigate increasingly complex business environments.

This is not trivial. Creating effective AI systems requires a different approach than traditional AI implementation.

Most approaches immediately focus on building a set of predefined components. In fact, they prioritize machines over people.

ICAs are so dependent on human factors, personality, and nuanced work patterns that the architecture itself becomes an intelligent choice. Building effective ICA systems requires intelligence to determine what capabilities are needed for specific knowledge work and how they should be orchestrated. It’s not about integrating technological components, but about thoroughly understanding how work is done and where augmentation can add value.

Every person has different communication and work styles, and effective augmentation requires adapting to these preferences while still providing optimal guidance.

For ICAs to be successful, they must be customized not only for roles and responsibilities, but also for individual personalities and work styles.

What makes a person particularly effective in their role?

 

What would significantly improve their day-to-day?

How could the person’s experience scale across the organization?

This human-centric approach revolutionizes the traditional implementation model. Instead of forcing people to adapt to technology, technology adapts to people.

The Way Forward: From Point Solution to Cognitive Enterprise

Design for the Decision, Not the Tool: Human-centered, AI-driven, and results-oriented. Map crucial decisions in each role, capture the tacit reasoning behind them, and let those insights, not the latest model release, drive your roadmap.

 

Establish Dynamic Architectures: Assemble modular layers of data, models, and agents that learn from every decision, improve autonomously, and redeploy at enterprise scale without reconfiguring the stack.

Scale Institutional Wisdom: Incorporate expert intuition into intelligent decision architectures so every associate, new or veteran, acts with the company’s accumulated experience.

Govern for Dynamic Accountability: Shift from static controls to real-time guardrails that audit decisions, detect bias, and course-correct on the fly, keeping humans firmly in control of the «why.»

 

Closing Charge

The next decade won’t reward companies that simply automate faster; it will reward those that combine human judgment with artificial intelligence so that every decision is smarter than the last. Create the architecture, connect the learning loops, and enable your company to start thinking at the speed of your ambition.


Responsible Decision-Making with AI

The following contribution is from DecisionBrain, which defines itself as follows: At DecisionBrain, we know how difficult planning can be. Supply chain, manufacturing, logistics, workforce planning, and maintenance are just a few of the areas where we have come to appreciate the complexity and uniqueness of the entire organization.

 

  1. Introduction

Can artificial intelligence (AI) make reliable and explainable decisions for businesses? Businesses and government agencies depend on accurate decision-making regarding customer interactions, product or service eligibility, workforce scheduling, and supply chain planning. To what extent does modern AI facilitate everyday decision-making, which is essential to the operations of every organization?

Modern Generative AI (GenAI) systems, such as Large Language Models (LLMs), appear to offer insightful answers to decision questions, but they operate by leveraging statistical patterns rather than understanding meaning and applying formal reasoning.

This phenomenon, sometimes referred to as the «Stochastic Parrot» effect, underscores the limitations of relying on these models in high-stakes scenarios.

 

Industries ranging from healthcare to finance are increasingly relying on AI for decision-making, highlighting the urgency of recognizing GenAI’s potential, as well as its limitations.

Techniques such as chain-of-thought prompting (CoT) and retrieval-augmented generation (RAG) improve AI results but do not address the fundamental absence of genuine reasoning.

Hybrid approaches that combine GenAI with explicit decision-making frameworks, such as rule engines, optimization tools, and simulations, offer a path to more reliable results.
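One way to sketch such a hybrid, assuming the LLM is used only to extract structured fields while a deterministic rule engine makes the final call. The claim fields, the rules, and the `extract_fields` stub are all hypothetical placeholders for illustration:

```python
def rule_engine(claim):
    """Deterministic, auditable decision rules: the final call never rests
    on the language model alone."""
    if claim["age_months"] > 24:
        return "reject", "warranty expired"
    if claim["defect_type"] not in {"manufacturing", "material"}:
        return "reject", "defect type not covered"
    if claim["amount"] > 5000:
        return "escalate", "above auto-approval limit"
    return "approve", "all rules satisfied"

def extract_fields(free_text):
    """Stand-in for the GenAI step: a real system would prompt an LLM to
    turn the customer's claim letter into this structured form."""
    return {"age_months": 10, "defect_type": "manufacturing", "amount": 1200}

claim = extract_fields("My espresso machine's pump failed after ten months...")
decision, reason = rule_engine(claim)
```

The division of labor is the point: the probabilistic model handles unstructured language, while the explicit rules remain inspectable, testable, and compliant.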

  2. Generative AI and Its Limitations in Autonomous Decision-Making

2.1 The «Stochastic Parrot» Dilemma

Generative AI language models work by predicting the next word in a sequence based on probabilities derived from the often massive training data. This probabilistic approach can produce human-like linguistic fluency and broad information across many domains, but it has an inherent weakness in autonomous decision-making.
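The mechanism can be illustrated with a toy bigram model, the simplest possible next-word predictor; the tiny corpus and function names are invented for the example, and real LLMs are vastly larger but share the same statistical core:

```python
from collections import Counter, defaultdict

corpus = (
    "the claim was approved the claim was rejected "
    "the claim was approved the payment was sent"
).split()

# Count which word follows which: a bigram model of next-word prediction.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word. The model has no idea
    whether a claim *should* be approved; 'approved' simply followed 'was'
    more often in the training text."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

prediction = predict_next("was")  # 'approved', purely by frequency
```

Nothing in the model represents warranties, claims, or correctness; scale it up by many orders of magnitude and the fluency improves dramatically, but the underlying operation is still frequency-driven prediction.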

Although LLMs may appear to reason through complex tasks, they actually perform advanced pattern matching rather than applying a grounded understanding of the world.

They excel at generating coherent answers by identifying linguistic and conceptual correlations in massive text corpora.

However, they lack the ability to verify factual accuracy or interpret context semantically by applying formal rules or searching for correct outcomes in a limited solution space.

This deficiency becomes especially concerning when addressing operational or strategic decisions that require deep domain knowledge, ethical considerations, or strict regulatory compliance.

2.2 Hallucinations and Deceptive Results

LLMs cannot verify the authenticity of their claims and sometimes fabricate nonexistent facts, a phenomenon commonly known as «hallucination,» though arguably better described as «confabulation.»

These types of results can be dangerously misleading when applied to regulated sectors such as finance, public services, or healthcare, where even a small error can have significant consequences.

 

The refined style of GenAI content makes it difficult for non-expert readers to spot inaccuracies, especially if they lack contextual knowledge.

Since LLMs are designed to maximize the likelihood of producing coherent and engaging texts, they can confidently fabricate details in the absence of corroborating data.

Users may inadvertently rely on these misleading claims, increasing the risk of errors in crucial decisions.

2.3 Biases in Training Data

Generative AI, like all machine learning techniques, reflects and amplifies the biases inherent in its training set. Even before the advent of LLMs, companies like Amazon faced this problem when an ML-based hiring model perpetuated historical gender discrimination.

Efforts to clean problematic data or adjust model weights can mitigate biases to some extent, but many correlations remain deeply embedded in large training corpora.

Bias not only leads to discriminatory outcomes but also increases legal and reputational risks, especially in jurisdictions that enforce strict anti-discrimination laws.

Human oversight and explicit logical constraints play a crucial role in identifying, assessing, and correcting these hidden biases before they manifest in real-world decisions.

2.4 The Black Box Problem

GenAI engines, and machine learning models more generally, often operate as opaque "black boxes," offering little insight into their internal reasoning. This opacity undermines accountability and trust, especially when AI systems shape critical organizational policies.

Regulators increasingly require that AI decisions be explainable and traceable; European Union legislation, such as the AI Act, requires documented justifications supporting "high-risk" AI outputs.

Traditional machine learning methods, especially deep neural networks, generally resist direct interpretation because their internal weights don’t correspond to the concepts and metrics humans use to derive compelling explanations.

While Explainable AI (XAI) techniques and agentic workflows can highlight factors or trace high-level decision steps, they don’t transform a black box into a transparent system.

This challenge complicates both internal governance and external compliance, forcing companies to reconcile the promise of GenAI with the legal and ethical need for explainability.

On the other hand, explicit reasoning technologies, such as business rules or constraints, enable systems to provide sensible explanations based on the rules and ontologies used to describe them, making them highly accountable.

 

However, these systems lack the linguistic capabilities of LLMs. Perhaps a combination of both is a good approach? We’ll return to this topic shortly.

2.5 Over-reliance on AI

An over-reliance on GenAI can neglect important human insights, ethical reasoning, and context-specific knowledge.

While LLMs are adept at summarizing large volumes of text and identifying common patterns, they don’t automatically adapt to ongoing regulatory changes, evolving business needs, or cultural nuances.

In some cases, critical terms in contracts or policies require explicit modeling and meticulous version control. If these points are overlooked and delegated entirely to an LLM, errors with far-reaching implications can occur.

On the other hand, domain experts remain essential to provide situational awareness and ensure that AI outputs align with business objectives, legal constraints, and ethical considerations.

2.6 Change Management

The real world changes daily: laws and regulations are introduced or modified; new products enter the market; new competitors emerge, demanding changes in marketing and customer interaction.

AI models based on statistical analysis of the past are hampered when the present changes significantly; the examples used for training are no longer relevant, and generating new training data can take months, much longer than necessary to respond to changes in the environment.

In fact, if systems are automated using old models, it may even be impossible to find training examples that make correct decisions using the updated rules or situation.

In these cases, an explicit representation of rules and procedures in the real world is needed, along with strong governance.

  3. Improving AI Reasoning with Advanced Techniques

These limitations of GenAI models have been observed since their introduction, and many are in fact present in all statistical or machine-learning approaches to AI.

In the field of LLMs, several approaches have been suggested to increase the accuracy, reasoning power, and timeliness of GenAI inference. A brief summary of some of these approaches follows.

3.1 Chain of Thought (CoT) and Tree of Thought (ToT)

Chain-of-Thought prompting asks a model to describe the intermediate steps before reaching a conclusion, which can reduce errors by making the reasoning process more explicit.

ToT extends this idea by exploring multiple candidate chains in parallel, although the additional branching paths increase computational overhead.

While both CoT and ToT can help clarify model answers, they do not guarantee logical or factual correctness.

Because LLMs still rely on their original training data and core architecture, they can still produce flawed reasoning if the question involves novel constraints or complex logic not found in familiar patterns.
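A minimal sketch of how CoT prompting works in practice: the prompt explicitly asks for intermediate steps, and the application parses the final answer out of the step-by-step response. The prompt wording, the `Answer:` convention, and the canned model output are illustrative assumptions, not any vendor's actual API.

```python
def build_cot_prompt(question: str) -> str:
    """Wrap a question in a chain-of-thought instruction."""
    return (
        f"Question: {question}\n"
        "Let's think step by step, writing out each intermediate step, "
        "then state the final answer on a line starting with 'Answer:'."
    )

def extract_answer(llm_output: str) -> str:
    """Pull the final answer out of a step-by-step response."""
    for line in llm_output.splitlines():
        if line.startswith("Answer:"):
            return line[len("Answer:"):].strip()
    return llm_output.strip()  # fall back to the raw output

# A canned response standing in for a real model call:
fake_output = "Step 1: 17 * 3 = 51\nStep 2: 51 + 4 = 55\nAnswer: 55"
print(extract_answer(fake_output))  # 55
```

Note that the intermediate steps are still generated text: they make errors easier to spot, but as the paragraph above explains, they do not guarantee the steps are logically sound.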

While these techniques significantly improve reasoning capabilities, they come at a cost. Recursively calling an LLM, especially for CoT or ToT workflows, increases computational requirements, resulting in higher energy consumption and API usage costs.

It can also magnify the impact of hallucinations: by reinjecting the results of previous inferences into the LLM, any initial hallucinated results will simply be recursively integrated into the chain of thought.

3.2 Retrieval-Augmented Generation (RAG)

Retrieval-Augmented Generation addresses the problem of outdated or incomplete model knowledge by incorporating external data sources at inference time.

 

Rather than relying solely on pre-trained parameters, the application calling the LLM first queries a database, search engine, or knowledge base, and uses the results to build an augmented prompt for the LLM.

This can mitigate certain hallucinations by providing up-to-date or domain-specific data in the prompt.

However, if the retrieval system fails to locate all relevant records or to distinguish critical content from superfluous text, the AI could generate incomplete or incorrect answers. An effective RAG implementation therefore requires rich metadata, well-managed repositories, and strong indexing or search algorithms.
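The inference-time flow can be sketched in a few lines. This is a deliberately naive retriever (keyword overlap over an in-memory dictionary); production systems use vector search and proper indexing, and the document contents here are invented for illustration.

```python
# A minimal RAG sketch: retrieve the most relevant document, then
# prepend it to the prompt so the LLM answers from current data.
documents = {
    "policy_2024": "Claims for water damage must be filed within 30 days.",
    "faq": "Our office hours are 9am to 5pm on weekdays.",
}

def retrieve(query: str) -> str:
    """Return the document sharing the most words with the query (naive)."""
    q_words = set(query.lower().split())
    return max(documents.values(),
               key=lambda doc: len(q_words & set(doc.lower().split())))

def build_rag_prompt(query: str) -> str:
    """Assemble an augmented prompt: retrieved context + the question."""
    context = retrieve(query)
    return (f"Context: {context}\n\n"
            f"Question: {query}\nAnswer using only the context above.")

prompt = build_rag_prompt("How long do I have to file a water damage claim?")
print("30 days" in prompt)  # True -- the relevant policy text was retrieved
```

The weakness the paragraph above describes is visible even here: if the keyword match picks the wrong document, the LLM will confidently answer from irrelevant context.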

3.3 Function Calls and Agentic Workflows

Function calls and agentic workflows integrate LLMs with external APIs or directed graphs of specialized tasks.

These capabilities allow AI systems to query up-to-date services, perform specific calculations, or verify specific data.

For example, an LLM can obtain current weather information or real-time stock prices, rather than relying on outdated training data.

While agentic workflows automate multi-step processes by delegating tasks among different "nodes" (e.g., rules engines, LLM calls, or domain-specific applications), proper governance is crucial.

If constraints are not well-defined or if too much autonomy is granted to the AI, unexpected results can arise, raising security or liability concerns.

This capability is exciting: the ability to identify external services to call opens up a powerful approach for building hybrid systems. Reasoning engines can be incorporated into the flow of an LLM-based application, combining the linguistic prowess of LLMs with more accurate, explainable, and manageable decision-making technologies such as business rules and optimization, as we discuss below.
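The dispatch mechanism behind function calling can be sketched as follows. The JSON message shape, the tool registry, and the stub service are illustrative assumptions, not any vendor's actual tool-calling API; the point is that the application, not the LLM, executes the call.

```python
import json

def get_stock_price(symbol: str) -> float:
    """Stub standing in for a real market-data service."""
    return {"ACME": 123.45}.get(symbol, 0.0)

# Governance point: only explicitly registered tools can ever be invoked.
TOOLS = {"get_stock_price": get_stock_price}

def dispatch(llm_message: str):
    """Parse a structured tool-call request emitted by the LLM and run it."""
    request = json.loads(llm_message)
    func = TOOLS[request["name"]]       # unknown tool names raise KeyError
    return func(**request["arguments"])

result = dispatch('{"name": "get_stock_price", "arguments": {"symbol": "ACME"}}')
print(result)  # 123.45
```

Keeping the registry small and explicit is one concrete way to apply the governance the paragraph above calls for: the model can only request actions the application has deliberately exposed.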

  4. Hybrid AI Approaches for Trustworthy Decision-Making

4.1 Integrating Generative AI with Decision Models

 

Because GenAI alone struggles with factual reliability, explainability, and context sensitivity, many organizations adopt hybrid strategies that combine LLM capabilities with structured logic engines, optimization tools, or simulation systems.

In this architecture, GenAI excels at analyzing unstructured data, for example, by summarizing legal documents or extracting relevant policy references. Once the data is structured, explicit models such as Business Rules Management Systems (BRMS) and mathematical optimization engines apply verified constraints or performance criteria.

This hybrid approach prevents the "stochastic" output of an LLM from overriding legal or ethical imperatives, and allows subject matter experts to adapt the rules as needed to respond to regulatory changes, corporate priorities, or new market conditions.

4.2 Illustrative Example: Insurance Claims

An insurance company could use an LLM to analyze water damage incident reports, extracting details such as policy numbers, timeframes, and damage descriptions.

The LLM then passes these structured findings to a rules engine that determines eligibility and recommended payouts based on contract terms, claimant history, and local regulations.

If the LLM attempts to introduce superfluous or speculative information (e.g., imagining nonexistent coverage), the rules engine will override it, ensuring the final decision adheres to verified guidelines.
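The rule-engine side of this hybrid pattern can be sketched as follows: the LLM's structured extraction is validated against explicit rules before any payout decision. The field names, covered perils, deadline, and payout cap are illustrative assumptions, not a real insurer's rules.

```python
COVERED_PERILS = {"water damage", "fire"}
MAX_PAYOUT = 10_000          # hypothetical payout cap
FILING_DEADLINE_DAYS = 30    # hypothetical filing deadline

def decide_claim(extracted: dict) -> dict:
    """Apply hard business rules to the fields the LLM extracted."""
    reasons = []
    if extracted.get("peril") not in COVERED_PERILS:
        reasons.append("peril not covered")
    if extracted.get("days_since_incident", 999) > FILING_DEADLINE_DAYS:
        reasons.append("filed after deadline")
    approved = not reasons
    payout = min(extracted.get("claimed_amount", 0), MAX_PAYOUT) if approved else 0
    # Every decision carries its rule-based justification -- the kind of
    # explainability an LLM alone cannot provide.
    return {"approved": approved, "payout": payout,
            "reasons": reasons or ["all rules satisfied"]}

claim = {"peril": "water damage", "days_since_incident": 12, "claimed_amount": 4_500}
print(decide_claim(claim))  # approved, payout 4500, "all rules satisfied"
```

Whatever the LLM extracts (or invents), the final decision passes through these deterministic checks, which is precisely how the rules engine overrides speculative output.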

By combining human oversight and explicit business rules with GenAI’s data mining capabilities, insurers enjoy faster claims processing without sacrificing consistency, explainability, or regulatory compliance.

When managing customer interactions related to insurance claims, a hybrid approach can also be helpful.

 

An LLM can interpret customer emails or messages to determine intent and extract key insights.

Tool management and RAG can enrich this information with data from a central insurance system and CRM to provide complete context.

This data can also include customer segmentation and scoring information generated by machine learning models.

This contextual data can be incorporated into a rules-based system to ensure compliance with corporate customer management policies, including rules on differentiated management for different customer segments, thereby ensuring responsiveness and profitability.

 

 

Ethical Implications of AI-Powered Decision-Making

The following contribution is from the RSM portal, which defines itself as follows: At RSM, we help our clients overcome new challenges, navigate change, and adapt to thrive. By working together, generating deep insights, combining cutting-edge technology and practical experience, we deliver unparalleled understanding and empowering confidence. For a changing world. For the future. For everyone.

RSM is a powerful network of audit, tax, and consulting experts with offices around the world. As an integrated team, we share skills, knowledge, and resources, along with a client-centric approach based on a deep understanding of your business. This is how we empower you to move forward with confidence and reach your full potential.

Author: Jason Yau. Partner & Chief Technology Officer. Hong Kong

 

 

Key Takeaways:

  1. AI-powered decision-making systems, while offering powerful capabilities and efficiency improvements, carry significant risks when organizations rely excessively on automated processes without proper safeguards.

  2. The inherent complexity of modern AI systems poses challenges for transparency and accountability, as internal decision-making processes are often difficult to interpret or explain.

  3. Successful AI implementation requires robust human oversight frameworks, including clear guidelines and review mechanisms, to ensure the technology serves as a complementary tool, not a substitute for human judgment.

You may or may not recall that at the end of 2021, online real estate giant Zillow exited the home-flipping (iBuying) business, laying off nearly 25% of its workforce as its market value plummeted. The company began 2021 with a peak valuation of $48 billion and ended the year at $14 billion.

These significant losses and disruptions can ultimately be attributed to artificial intelligence (AI) algorithms that made very poor decisions. In this case, an AI system was overpaying for properties to such an extent that Zillow was burning cash at an unprecedented rate, and its valuation has not recovered since.

While it is true that positions are being eliminated, a 2020 WEF report, published before ChatGPT, suggests that although 85 million positions may be displaced, 97 million new jobs are expected to emerge, specifically in fields such as data science, AI development and monitoring, and AI-human collaboration roles.

 

 

AI decision-making is not an entirely new concept, but it is one that has gained significant importance in recent years, especially with its sudden surge in popularity and use cases.

As AI increasingly influences decisions, whether assisting humans or acting fully autonomously, the ethical implications of the technology go beyond simple errors.

Where should AI be used? Where do we draw the line?

AI’s ability to make decisions for us raises fundamental questions about human autonomy, fairness, bias, and our trust as a species in an increasingly automated world that will continue to shape our lives.

AI is undoubtedly a revolutionary technology that offers unparalleled opportunities, and used correctly, it belongs wherever it genuinely adds value. The real question, however, is: how much control are we willing to give up?

 

The Main Ethical Concerns of AI Decision-Making

The ethical debate surrounding AI has been a hot topic since its inception. There is no universal solution or single right way to implement it.

At its core, it is a technology that must be approached with pragmatism and care.

However, this does not answer the burning questions about its ability to make decisions, especially decisions that can impact our society:

Who bears responsibility when AI makes mistakes?

How do we protect individual privacy when training these systems with large data sets?

The delicate balance between advancing AI capabilities and maintaining human autonomy requires careful consideration of accountability frameworks and fair implementation practices.

Perhaps above all, it requires a deep understanding of the ethical implications of its use. Let’s therefore analyze a couple of the main concerns.

The Transparency Paradox

At the heart of AI ethics lies a fundamental and unfortunate paradox: the more powerful and complex AI systems become, the less transparent their decision-making processes can be to human oversight.

This is the fundamental concept underlying what is known as the "black box" problem.

This problem is especially prevalent in deep learning AI models that use complex neural networks where data is processed through multiple layers of interconnected nodes, and inputs are mapped to "tokens" that organize the data hierarchically.

The problem is that, in many cases, no one can determine how an AI arrives at its conclusions. We can know the input and the output; everything in between is a mystery.

The inability to see how an AI model arrives at a given decision raises numerous ethical and technical issues. Technically, this means that if there is a problem with an AI’s output, it is difficult to know where things are going wrong and what to fix.

Problems in standard software can be more easily identified and fixed with a patch; complex AI is a different matter. Ethically, this raises the question: how can we fully trust a system we don’t understand, especially when it comes to decisions that can drastically affect someone’s life?

Algorithmic Bias and Fairness

Algorithmic bias can have a significantly negative impact on the people for whom decisions are made.

There have been cases of algorithmic discrimination in which people of color have been denied medical care or men have been favored over women for jobs.

Even when unintentional, these biases perpetuate existing inequalities and, left unchecked, have had, and can continue to have, very adverse consequences for people.

Biases in AI models can come from multiple sources: historical datasets that reflect contemporary societal biases, underrepresentation on the development teams that create AI models, and human error due to issues such as flawed testing protocols. With AI decision-making being used and prospectively introduced in more areas, such as housing (as discussed at the outset), loan approvals, and criminal risk assessments, ensuring integrity and fairness is crucial.

However, while algorithmic bias has been a problem in the past and will undoubtedly continue to crop up occasionally, it is not an issue that has gone unnoticed.

The black box problem may make it difficult to solve.

However, with continuous monitoring, increased human oversight, and bias detection tools, biases in AI models can be more readily detected and mitigated in the future.

 

The Data Problem

Many would argue that this is the biggest problem, and they wouldn’t necessarily be wrong.

The biggest risk, especially for organizations looking to use AI, lies in how AI models collect, use, and learn from data.

Major companies such as Apple, Verizon, and iHeartRadio have banned the use of models like ChatGPT for fear of data leaks; Samsung, in particular, restricted the chatbot’s use after discovering that employees had entered sensitive code into it.

The risk of inadvertent disclosure of corporate trade secrets represents a critical ethical boundary.

AI systems could inadvertently reveal confidential information and create significant legal and competitive risks for organizations.

For organizations looking to incorporate elements of AI, especially generative AI, it is critical to have sophisticated data governance strategies that protect both individual privacy and corporate intellectual assets.

At a broader societal level, generative AI models are trained on both user-supplied input and data scraped from the internet, meaning they have the potential to, and often do, memorize data supplied to them, regardless of its sensitivity.

When combined with biometric data, expressions, and other personally identifiable information (such as financial records and credit scores), individuals can be involuntarily exposed, with intimate personal characteristics transformed into computational data points without their express consent.

An individual’s economic opportunities could be drastically limited by opaque computational assessments that reduce human complexity to numerical scores.

AI’s Best Protection: Human Oversight

As mentioned, continuous monitoring and human oversight have become crucial factors in keeping AI in check.

The key to more effective, risk-averse, and beneficial AI decision-making lies in implementing human-in-the-loop processes. These frameworks maintain human judgment at critical decision points while leveraging AI’s processing capabilities.

Businesses benefit greatly from what AI can offer, but mitigating risk requires human intervention.

When implementing AI, it is crucial to conduct a cost-benefit analysis. Full automation is most appropriate if a decision-making function has little or no long-term impact.

Organizations looking to implement AI should establish a set of best practices for their employees, a set that balances AI efficiency with business accountability.

Oversight thresholds are a good example. In this case, AI decisions that exceed certain risk levels automatically trigger human review. Appeal mechanisms should be established for parties affected by decisions made with AI. To ensure optimal operation, regular audits of system performance should be conducted.

A robust AI governance framework can also complement human oversight for greater protection.

At the governance level, a systematic approach that integrates structured accountability mechanisms, ethical design principles, continuous monitoring, and cross-functional oversight will help mitigate risks from the outset.

For particularly risk-averse organizations, creating multi-tiered review processes, integrating ethical considerations directly into all AI-based architectures, and establishing adaptive governance models can ensure that AI systems remain fundamentally subject to human judgment.

The framework should mandate clear chains of accountability and develop transparent audit trails for added security.

The path forward requires enthusiasm, but also caution.

AI is an incredibly powerful tool with almost unlimited application potential. It is easy to get caught up in the excitement of this technology and move forward without performing essential due diligence.

Organizations implementing AI systems must invest in robust oversight mechanisms with clear guidelines for implementation to ensure risks are mitigated. As mentioned, AI is a great tool, but a tool nonetheless.

A hammer is only as safe or dangerous as the person wielding it. Furthermore, laws and policies have been enacted or are being implemented around the world, many of them to provide additional protection against AI misuse.

As a society, the responsibility falls on everyone, from the development to the implementation of AI, to ensure it serves the common good before the gap between technological capability and ethical framework becomes unbridgeable. As we move toward exciting possibilities, it is critical to proceed with caution.

 

This information has been prepared by OUR EDITORIAL STAFF