Over the past few years, we have seen the rise of new categories of AI, such as generative models and agents. How do you see AI continuing to evolve in 2025 and beyond?
AI is AI. Terminology such as “generative” and “agent” can help simplify the technology for the public, but it can also be misleading. These capabilities, such as Natural Language Generation (NLG), have been around for a while and are just part of a much broader AI toolbox.
As hardware continues to improve, AI will become more capable, more specialized, and more deeply embedded in our daily lives, much as the Internet gradually became a foundational technology. A strong example is medical AI, which is rapidly emerging as a new standard of care. There were a few early adopters, and now we are seeing a wave of fast followers. Patients are beginning to expect AI-driven capabilities when choosing doctors, hospitals, and insurance providers. Physician acceptance has increased from about 35% in 2019 to about 70% today, an important cultural change.
Under the Trump administration, many market participants are hoping that AI regulation will shift, with the US taking a slower, lighter-touch approach than European regulators. How do you expect regulation to change during this administration?
This administration appears to be practical and supportive of American businesses. Overregulation risks slowing innovation, especially amid intense AI competition from China and Russia. I hope the administration will treat US-based AI companies like Delorean AI as a strategic asset.
The European Union’s regulatory approach has curtailed its own tech sector in many ways. The largest American tech companies face significant regulatory headwinds in Europe, and the region’s AI industry is struggling to stay competitive globally. It should serve as a cautionary tale for us.
Many concerns labeled “AI Ethics” are already covered by existing data privacy laws. Government agencies should focus on implementing what is already in place rather than creating new, overlapping regulations.
Finally, I strongly recommend seeking guidance from real practitioners who build and use AI every day, rather than relying solely on commentators and academics who may be removed from how the technology is actually applied.
When thinking about global AI regulation, how can we ensure that rules do not hinder innovation and growth?
By their nature, regulations tend to hinder innovation and growth. However, I believe the necessary basic guardrails are already in place through existing data privacy laws. Enforcement should be the focus, not expansion.
We encourage the US government to actively support domestic AI companies in several key areas:
- Supply Chain Security: Ensure access to the hardware and materials we need, including rare earth minerals, chips, and servers.
- IP Protection: Protect American innovation. If a foreign party engages in IP theft, its US-based representative should be held accountable.
- R&D Incentives: Current R&D tax credits are insufficient. AI innovation requires more meaningful incentives.
- Human Resources Strategy: In the short term, expand the H-1B visa program. In the long run, strengthen STEM education and ensure that universities are AI-ready.
Finally, it needs to be easier for government agencies and private companies to adopt AI tools. This is how we stay competitive.
In your recent discussions, you have pointed to the need for more hardware and increased server capacity as AI continues to develop. What are your expectations for growth in that capacity over the next year and the next decade?
The only honest answer is that growth will be exponential.
From a national security and economic perspective, we must ensure access to the raw materials and skilled workers needed to build and operate chip and server infrastructure. My colleague and I have already been exploring locations that can provide the power capacity needed to host these server farms.
This demand provides a compelling opportunity to integrate renewable energy sources into AI infrastructure. For example, older factory sites in New England can be revitalized using hydroelectric power. Sustainable growth in this sector has enormous potential.
You have also said that AI itself is not biased. Can you explain in more detail how companies can ensure that the datasets used to build AI models are not biased?
That’s right: AI itself, as a machine, is essentially unbiased. Bias comes from the data it is trained on. And that is where things get complicated.
First, companies should regularly audit their models to ensure there are no biases related to legally protected classes; there are already regulations that require this. Second, well-trained scientists who adhere to sound scientific methods understand the importance of designing a balanced dataset from the start.
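For illustration only, here is a minimal sketch of the kind of audit described above: comparing a model’s positive-outcome rates across groups of a protected attribute and flagging large gaps. The column names, the sample data, and the 0.8 “four-fifths” threshold are assumptions made for this example, not a description of Delorean AI’s methodology or of any specific regulation.

```python
# Minimal fairness-audit sketch: compare positive-outcome rates across a
# protected attribute and flag gaps using the common "four-fifths" heuristic.
# Column names, data, and the 0.8 threshold are illustrative assumptions.
import pandas as pd


def disparate_impact_report(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.DataFrame:
    """Return each group's positive-outcome rate and its ratio to the best-off group."""
    rates = df.groupby(group_col)[outcome_col].mean().rename("positive_rate")
    report = rates.to_frame()
    report["ratio_to_max"] = report["positive_rate"] / report["positive_rate"].max()
    report["flagged"] = report["ratio_to_max"] < 0.8  # four-fifths rule of thumb
    return report


if __name__ == "__main__":
    # Hypothetical scored predictions joined with a protected attribute.
    scored = pd.DataFrame({
        "group": ["A", "A", "A", "B", "B", "B", "B", "C", "C"],
        "approved": [1, 1, 0, 1, 0, 0, 0, 1, 1],
    })
    print(disparate_impact_report(scored, group_col="group", outcome_col="approved"))
```

In practice such a check would run on real scored data as part of a recurring audit, and a flagged group would trigger a deeper review rather than an automatic conclusion of bias.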
It is also important that clients ask vendors the right questions. Transparency in model development and training data is essential.
However, data can reflect an inherently homogeneous population. For example, models trained on data from Iceland, where the population is relatively uniform, may not perform well when applied to a diverse region such as Orlando. That is not model bias; it is a mismatch between the training data and the application context.
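To make that training-versus-application mismatch concrete, here is a small, hypothetical sketch that compares a single feature’s distribution in training data against its distribution in live data using the Population Stability Index. The feature, the synthetic data, and the usual 0.1/0.25 reading of PSI values are illustrative assumptions, not figures from this interview.

```python
# Sketch of a training-vs-deployment mismatch check using the Population
# Stability Index (PSI) on one numeric feature. Data and thresholds are
# illustrative assumptions only.
import numpy as np


def population_stability_index(train: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Compare the binned distribution of a feature between training and live samples."""
    edges = np.histogram_bin_edges(train, bins=bins)
    # Clip live values into the training range so out-of-range values still count.
    live_clipped = np.clip(live, edges[0], edges[-1])
    train_pct = np.histogram(train, bins=edges)[0] / len(train)
    live_pct = np.histogram(live_clipped, bins=edges)[0] / len(live)
    # Avoid log(0) / division by zero for empty bins.
    train_pct = np.clip(train_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - train_pct) * np.log(live_pct / train_pct)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train_ages = rng.normal(45, 8, 5000)   # hypothetical homogeneous training population
    live_ages = rng.normal(35, 15, 5000)   # hypothetical more diverse deployment population
    psi = population_stability_index(train_ages, live_ages)
    print(f"PSI = {psi:.3f}")  # roughly: < 0.1 stable, > 0.25 significant shift
```

A large shift on checks like this signals that the model is being applied outside the population it was built for, which is exactly the Iceland-versus-Orlando scenario described above.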
What can businesses and policymakers do today to prepare for the next wave of AI innovation?
Companies need to invest in AI literacy at the leadership level. In many cases, CEOs delegate AI decisions to CIOs who are IT experts but not AI experts. That is a serious mismatch. We need decision makers who understand the unique nature of AI technology.
Also, there is no need to reinvent the wheel. Buy proven AI products. Building custom solutions in-house makes little sense in some industries, such as healthcare, where AI is not a core competency.
For policymakers, it is important to seek input not only from theorists and strategists but also from those who are actually building and using AI. Real-world practitioners provide the most grounded and practical insights. Most importantly, policymakers must focus on enabling and nurturing the US AI industry rather than overregulating it.