Over the past few years, we have seen the rise of new types of AI, such as generative AI and AI agents. How do you see AI continuing to evolve in 2025 and beyond?
Technologies such as generative AI and agentic AI/agentic workflows feel new and are enjoying a surge of popularity, but they have been applied in various forms for years. What we are seeing is both broader exposure to and continued development of these technologies, along with an open-source tool set that makes them far more accessible. Generative AI, for example, has been around for decades, but new transformer architectures and greater computational capability make it easier to use and more attractive.
AI technology is constantly evolving. For 2025 and beyond, I think we will continue to see complex algorithms that were once reserved for doctoral-level computational scientists placed in the hands of everyday practitioners. This fuels a flywheel of experimentation and proofs of concept, along with strong demand for interpretable AI methods and enterprise-level capabilities for ethical AI testing. These capabilities are crucial for algorithms to mature into enterprise-grade solutions. Interpretable and ethical AI allows more organizations to enter the “golden age of AI,” where these incredible technologies can be used safely within responsible AI guardrails.
In your TradeTalks interview, you said that companies need to set standards for developing AI. How should companies define their criteria?
To define responsible AI standards, you must first assess your organization’s degree of AI maturity. The questions to ask include:
- Is there a Chief Analytics or Chief AI Officer responsible for directing AI development and standards?
- Is the organization structured around business/product teams, with separate AI teams reporting to each business unit?
- Or does the organization pursue AI only through specialized AI research teams?
- Or have you just started your AI journey?
It is important to understand the views of all stakeholders and ensure they are heard. This process involves determining where AI expertise already exists, where approaches are broadly shared, and where algorithms and practices differ. While this will encourage open discussion, the business still needs to arrive at a single standard AI approach: the Highlander principle, there can be only one. For businesses that don’t have existing AI practices to draw on, many organizations are happy to share their approach to help jump-start the process.
How can companies ensure their standards can be adapted to evolving regulations?
The power of a corporate AI standard is that, instead of managing dozens, hundreds, or even thousands of individual AI models to ensure each one meets regulatory thresholds, the company manages a single standard that it can discuss openly with regulators, gather input on, and evolve.
Tools like blockchain can encode the current standard and help practitioners meet model governance requirements. In doing so, these experts carve out more time to focus on innovation, find new and more effective ways to meet regulations, and evolve the standard as new regulations emerge. Again, this is achieved through a single model-standard vehicle, rather than by auditing each data scientist individually across tens, hundreds, or thousands of AI projects in the organization. Once you have decided how to change and update the standard, you can implement and manage it consistently across all projects and align data scientists with regulatory requirements at scale.
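To make the role of such tooling concrete, the sketch below is a hypothetical, greatly simplified Python illustration of a blockchain-style governance record: an append-only, hash-chained log of model-development decisions that auditors can verify has not been altered. It is not any vendor’s actual implementation, and the class, field, and model names are assumptions made purely for illustration.

```python
# A minimal sketch (hypothetical, not any vendor's tooling) of the idea behind a
# blockchain-style model-governance ledger: every development decision is
# recorded as an immutable, hash-chained entry, so auditors can verify that
# each model followed the corporate AI standard.
import hashlib
import json
from datetime import datetime, timezone


def _hash_entry(entry: dict) -> str:
    """Deterministically hash a ledger entry (sorted keys for stability)."""
    payload = json.dumps(entry, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()


class GovernanceLedger:
    """Append-only log of model-development decisions, chained by hash."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record(self, model_id: str, decision: str, evidence: str) -> dict:
        entry = {
            "model_id": model_id,
            "decision": decision,          # e.g. "bias test passed"
            "evidence": evidence,          # e.g. link to test artifacts
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": self.entries[-1]["hash"] if self.entries else None,
        }
        entry["hash"] = _hash_entry({k: v for k, v in entry.items() if k != "hash"})
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Re-hash every entry and check the chain is unbroken and untampered."""
        prev_hash = None
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev_hash"] != prev_hash or _hash_entry(body) != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True


ledger = GovernanceLedger()
ledger.record("credit_risk_v3", "interpretability review approved", "report-042")
ledger.record("credit_risk_v3", "bias acceptance criteria met", "test-run-117")
print("ledger intact:", ledger.verify())
```

Because each entry embeds the hash of the previous one, changing any recorded decision invalidates the rest of the chain, which is what makes a single-standard audit trail trustworthy to regulators.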
How do you expect regulations around AI to change under the current administration?
Some believe that regulation limits innovation, but I think regulation creates the spark that stimulates innovative solutions. For example, DeepSeek’s development team in China was constrained to lower-performance GPUs. They had to innovate, and innovate hard, to produce a performant, viable LLM competitor at a much lower cost. So while there may be fewer AI regulations under the current administration, that doesn’t mean proactive and creative organizations won’t strive to meet their AI objectives with safe and responsible AI.
You wrote a blog in February about what ethical AI is and how to identify hidden biases. Can you explain in more detail how companies can find hidden biases within their datasets?
What makes AI amazing is the use of machine learning in so many AI applications; it is the science of algorithms finding relationships that humans have not explicitly defined. This is inherently powerful because these algorithms can exploit relationships between inputs that humans would not anticipate as being predictive. That is what makes machine learning superhuman. However, machine learning is a double-edged sword: it delivers more predictive power and accuracy, but often in ways humans cannot understand, and in ways where models find proxies for protected groups. The latter can effectively propagate bias at scale.
To find hidden biases, data scientists can do two things. First, they can use interpretable machine learning algorithms, which expose the relationships between variables learned by the model for human inspection. Second, they can use automated bias testing. Together, these let teams constrain the complexity of the relationships the model learns so that humans can still interpret them, automate testing against bias acceptance criteria, and interrogate datasets for bias. This helps prevent data scientists from unknowingly folding bias into the model and continuing to propagate that bias at scale.
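As a concrete illustration of those two practices, the sketch below, assuming Python with NumPy and scikit-learn and a synthetic lending-style dataset, fits an interpretable model whose coefficients can be inspected directly and then runs an automated check of a bias acceptance criterion by comparing predicted approval rates across a protected group. The feature names, the 0.80 ratio threshold, and the dataset itself are illustrative assumptions, not a prescribed standard.

```python
# A minimal sketch of (1) an interpretable model whose learned relationships
# can be inspected and (2) an automated bias acceptance test, using synthetic
# data. All names and thresholds are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Synthetic applicants: income, debt ratio, and a protected attribute (0/1).
income = rng.normal(50_000, 15_000, n)
debt_ratio = rng.uniform(0, 1, n)
group = rng.integers(0, 2, n)

# In this toy dataset the outcome depends only on income and debt ratio,
# not on the protected attribute.
logit = 0.00005 * (income - 50_000) - 3 * (debt_ratio - 0.5)
approved = (rng.uniform(0, 1, n) < 1 / (1 + np.exp(-logit))).astype(int)

# (1) Interpretable model: a logistic regression whose coefficients can be
# read directly, exposing the relationships the model relies on.
X = np.column_stack([(income - income.mean()) / income.std(), debt_ratio])
model = LogisticRegression().fit(X, approved)
for name, coef in zip(["income (standardized)", "debt_ratio"], model.coef_[0]):
    print(f"coefficient for {name}: {coef:+.3f}")

# (2) Automated bias test: compare predicted approval rates by group and
# fail the check if the ratio falls below an agreed acceptance criterion.
preds = model.predict(X)
rate_a = preds[group == 0].mean()
rate_b = preds[group == 1].mean()
disparity = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"approval-rate ratio between groups: {disparity:.2f}")
assert disparity >= 0.80, "bias acceptance criterion not met"
```

In practice, the acceptance criterion, the protected attributes, and the interpretable model family would all come from the organization’s single AI standard, so the same automated test can be run identically across every project.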
What can businesses do today to prepare for the next wave of AI innovation?
First and foremost, you need to make sure you are continually following new AI developments. Next, consider the business problems you need to solve and whether they are genuinely waiting on new AI innovation, and on your ability to apply it.
In reality, AI innovation can become a hammer in search of nails. If you are already solving your business problems well today, whether with AI or other methods, preparing for the next wave means making sure you don’t get caught up chasing every new AI fad and development. If you have large, unresolved business needs that align with the promise of a new AI innovation, build AI staff or be prepared to work with vendors who specialize in that particular innovation. But for me, the best way to prepare is to understand when it’s the right time to jump in. Diving into every AI innovation that comes along can be counterproductive and can hurt business outcomes in the short term.