Understanding America’s AI Action Plan And AI Copyright Issues

This week the White House issued a dynamic, pro-business AI Action Plan focused on “winning the race” for AI supremacy. As I discuss in today’s podcast, this 28-page document details the government’s comprehensive policies to support infrastructure-building, energy production, and investment in AI across government, the military, commercial businesses, and our own careers.

In many ways this document represents a huge step forward in the government’s understanding of this technology and its focus on accelerating, licensing, and distributing American AI technology around the world. For example, it promotes the use of open-source models, the sale of American chips and software to China and other countries, and the need to invest in workforce training and workforce transformation.

I encourage you to read the document because it raises almost all the issues we face as consumers, workers, businesses, and technology investors.

What’s New: How The World Has Changed 

Only a few years ago Sam Altman testified before Congress (May 2023), stating that “development of superhuman intelligence is one of the biggest threats to humanity.” In fact, he urged Congress to examine regulation to protect workers, intellectual property, information security, and more.

Well, the world has flipped: today the emphasis is on US competitiveness, so most of the policy focuses on reducing regulation and speeding up the buildout of data centers, energy, and AI software businesses.

At roughly the same time, Meta won a lawsuit brought by 13 authors, a ruling that allows Meta to use these authors’ works to train its AI models. The concept here, which is discussed briefly in the AI Action Plan and the President’s speech, is that AI models can “learn” from copyrighted materials (books, music, movies, art, etc.) but not “reproduce” them. So we are entering a legal regime where we trust AI and AI vendors to “do the right thing” with all the information they collect.

In many ways I’m happy to see that the US government now has a team of people (David Sacks and others) who deeply understand this industry and who regularly meet with and support AI companies to help them compete and grow their businesses.

And I’m also thrilled to see that the Department of Labor is actively promoting policies to increase AI education, provide tax breaks for AI reskilling, and invest in research to understand the impact of AI on jobs. This is healthy for our economy, our businesses, and our ongoing efforts to transform our companies in “The Age of the Superworker.”

Where Is This Going?

While all these policies make economic sense and position us for a competitive global AI market, there are many issues still to work out.

  • Concerns about AI bias in recruiting continue, and a class-action lawsuit against Workday just moved ahead, potentially covering 1.1 billion job applications. So the jury is still out, so to speak, on how we legislate, manage, and monitor bias in recruitment.
  • Dozens of lawsuits by media companies, authors, artists, and musicians are still underway. (Read a great Wired article for visual details.) The judge who decided the Meta lawsuit clearly stated that copyright law remains in force, but in that case the plaintiffs did not prove that Meta’s AI was “copying” their work.
  • Companies worry about AI systems collecting their proprietary information. Most contracts now state that the AI tools a company buys must not “train” on its internal information, so vendors (including us) are agreeing not to train any AI systems on the information they index and use. (Our Galileo® license also explicitly prohibits companies from training their internal systems on our IP.)
  • The AI Action Plan states that the US Government does not want “biased” systems either, noting that “AI procured by the Federal government should objectively reflect truth rather than social engineering agendas… and should eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.” (This may be difficult or even impossible to measure, and in some ways it conflicts with the position of liberating vendors from IP licensing requirements to train their models.)

So the issue of how these models are trained remains tricky.

Let’s face it: unlike humans, the AI engines we use are not “ethical, careful, or respectful” unless we train them to behave that way.

So in this new world of “pedal to the metal” AI investment and growth, it’s up to us as vendors, technologists, and users to fill our systems with trusted data, educate our users to validate information, and apply “human judgment” wherever possible.

We work with most of the major AI providers (and we are one ourselves), and every one of them is working as hard as it can to eliminate bias and provide high-value solutions. (Galileo, for example, is trained only on our content and information from our trusted content partners.)

That said, these are rapidly changing systems, so we must understand these tools, do business with trusted vendors, and educate employees on their limitations so we can safely grow into the Superworker Companies of the future.

Additional Information

Why AI Harm To Jobs and Humanity Are Vastly Over-Hyped

Prompting is Programming: We’re All Software Engineers Now!

No, Entry Level Jobs Are Not Going Away.

The Four “New SoftSkills” We Need To Thrive In The Age of AI

CNBC Interview with David Sacks re: US AI Strategy

Get Galileo®, The AI Agent Exclusively Designed for Everything HR
