
The part of AI we're not talking about enough

Updated: Apr 20

Edition 48: April 17, 2026


Nearly every conversation about artificial intelligence leads to the same question:

 

What happens to people?

 

To the way we work, make decisions, and earn a living.

 

Billions of dollars are now flowing into artificial intelligence, most of it directed toward building faster, more capable systems. A far smaller share is focused on the harder question of how those systems should behave.

 

The nature of work has always shifted with new tools. The printing press, the tractor, the computer: each one changed not just how work was done, but where it happened and who did it.

 

Before offices and factory floors, there were scribes. The printing press allowed words to travel farther and faster, reducing the need to copy texts by hand.

 

Over time, labor moved from fields to factories, from factories to offices, and eventually onto screens that can be carried almost anywhere.

 

The land remained. The work moved.


[Image: Painting of three peasant women gleaning in a field]

That pattern is repeating now.

 

The question of what work looks like next extends across a wide range of professions, from coders and paralegals to analysts, recruiters, client service teams, financial professionals, and the middle managers who keep large organizations moving.

 

Analysis across many fields has already shifted from spreadsheets to algorithmic models and AI systems capable of scanning enormous volumes of information almost instantly.

 

The shift is no longer just about speed. It is about who, or what, is doing the thinking.

 

As that line begins to blur, a more practical question follows: not just what these systems can do, but how they are being used.

[Image: Hands typing on a laptop on a dock by water]

What makes this moment feel unsettled is not just the technology itself, but the way it has been presented and amplified.

 

For years, many of the companies building these tools have described them in sweeping terms: transformative, disruptive, society-changing, even potentially dangerous.

 

That framing has shaped the conversation.

 

What happens when these tools fall into the wrong hands?

 

The concern is not abstract. It reflects the scale at which these systems now operate, from generating convincing misinformation and deepfakes to enabling automated fraud, cyberattacks, and the rapid manipulation of public opinion.

 

And that is where the imbalance begins to matter: the gap between advancing capability and the effort to control it.


These systems are advancing quickly, taking on tasks that once required time, training, and judgment.


The rules around them are still being defined.

[Chart: Cyberattack targets in 2025 by industry, with manufacturing, healthcare, and tech leading]
These figures are approximate industry-level targeting rates compiled from multiple 2025 cybersecurity reports. Because the data comes from different sources and methodologies, the percentages are intended for relative comparison and do not total 100 percent.

However, the positive possibilities are equally real. In medicine, AI is helping researchers identify patterns in scans, analyze genetic data, and accelerate the search for new treatments. In science, it is uncovering signals in vast datasets that would otherwise go unnoticed.

 

Not only what AI may replace, but what it may help us discover sooner.

 

A treatment. A distant world. A clearer understanding of our own.

 

Still, beneath all of this lies a deeper question:

 

What should children be preparing to learn?

What kinds of work will remain valuable?

 

And how do we preserve truth, authenticity, and trust in a world where information can be generated at extraordinary speed?

[Image: Child drawing at a desk, alphabet on the board behind her]

While exact figures are difficult to obtain, available data suggests that less than 10 percent of AI spending is directed toward safety, governance, and misuse prevention, with the overwhelming majority focused on expanding capability.

 

That gap, or at least the lack of transparency around it, is one reason the issue carries real consequences.

 

History suggests that humanity adapts. The work moves. But this moment feels different because the destination is less clear.

 

The printing press expanded knowledge. Machines reshaped physical labor. Computers reorganized office work.

 

AI may accelerate discovery in medicine, science, and space in extraordinary ways. It may also challenge long-standing assumptions about careers, education, truth, and purpose.

 

No one knows exactly where this transition leads.

 

For now, the most realistic answer may be uncertainty.

 

But uncertainty is not the same as helplessness.

 

The future will be shaped not only by the technology itself, but by the decisions that take shape around it, in schools, workplaces, communities, and in the broader conversation about what we want these tools to do, and what limits we choose to place on them.

 

For readers interested in learning more, Harvard and MIT, right in our own backyard, along with Stanford’s Human-Centered AI Institute, Oxford’s Institute for Ethics in AI, and UNESCO’s global AI ethics framework, offer thoughtful work on the future of work, education, ethics, and the broader societal implications of artificial intelligence.

[Image: Sculpture of Aristotle]
Aristotle, whose work on ethics and human judgment still informs how we think about responsibility, choices, and the use of powerful tools.
[Image: Very Cool Facts logo with Sherlock Holmes silhouette]

If this made you think, you’ll find more at VeryCoolFacts.com


Share with a friend, follow along online, or subscribe for the next edition.


Every visit helps support science, education, health, and ideas that make the world better.


The facts only get cooler from here.


 
 
 
