Technology

AI Summit 2025: 10 Takeaways And Some Unanswered Questions

Right now, our relationship with AI is like one in which the hard issues are always put off. That never ends well.

I walked into the AI Summit with hard questions about industry maturity, infrastructure challenges, and implementation realities. My previous coverage explored some of these issues in my initial fireside chat analysis and pre-Summit post. Here’s what I found — and what I didn’t find — across 10 key takeaways.

My 10 Takeaways

1. Did the show focus on the above challenges? Largely, no. A few sessions mentioned infrastructure, for example, but none talked about the energy risks. The only one that came close was a session on sovereign AI, but it was mainly a marketing pitch for a supercomputer.

2. As for the verification and economic challenges, there was not much discussion. Rather, the Summit felt like another AI love fest: everyone with different spoons stirring in the same old bowl. I get it; it’s a show for the vendors and by the vendors. But shouldn’t there have been at least a little more discussion of reality and the challenges?

3. One bright spot was the cybersecurity stage. Most of the presenters there recognized the cybersecurity risks that sophisticated AI could pose, like AI platforms that adapt to defenses and then attack again and again. One presenter mentioned the risks to the electric grid and infrastructure, which could impact AI and slow its use.

4. I was particularly interested in hearing from the New York City presenters, who returned to discuss how the City uses AI to serve underserved communities. While progress has been made, much of their presentation focused on political threats to these programs, both from the incoming mayoral administration and more generally. The fears were palpable and understandable.

5. There were several references to ambient AI — AI that works in the background without people realizing it. That’s where we are headed. But perhaps the focus should have been more on what the tools AI supports could actually do and what problems those tools can now solve with AI’s help. Indeed, there appeared to be a lot of interest in the use of AI in health care and in finance. Those sessions were the best attended, which perhaps reflects an interest, as mentioned above, in how AI could be applied practically to make other things work better.

6. There was a lot of interest in what AI will do to creative fields and how humans could legitimately use AI in a creative fashion. The prevailing view seemed to be that ideas come from humans, and AI enables the implementation and fleshing out of those ideas in ways not previously possible. That’s fine for now. But the real issue is what AI, as it advances, will do to the creative fields and the arts. The sessions looked less at that question than at what AI can do now.

7. A common and perhaps by now trite theme: AI with humans beats AI or humans alone. It’s the human-in-the-loop argument. But rarely does anyone stop and ask what this means. Which human? And where in the loop does the human fit, today and tomorrow? I’m not faulting the AI Summit for not asking these questions; no other conference is asking them either.

8. I have to talk about the facility, the Javits Center, in particular since it will be the site of the legal tech conference Legalweek in 2026, when that conference moves from the midtown Manhattan hotel where it’s been for years. The good? Javits is roomy, and the exhibit space flows well. It’s not chopped up across three floors like the Hilton space is. The food at Javits is not as bad as at some conference venues. There’s even a Starbucks onsite.

The bad? Many of the stages were on the exhibit floor, and for the most part the presentations there were hard to hear over the din of the rest of the floor. Whether Legalweek will resort to the same arrangement remains to be seen. But it’s distracting, to say the least.

The ugly? It’s a walk from most hotels. There are few restaurants in the vicinity and no shopping nearby. All the things that made the midtown site attractive to many are far from the Javits Center. That doesn’t particularly bother me, since I go to several shows where walking some distance from place to place is necessary. But based on the feedback to this year’s ABA TechShow, of which I was co-chair and which made a similar move to a similar venue, McCormick Place in Chicago, I predict Legalweek will hear a slew of complaints about this. And since it will be in early March, it may be a cold walk as well.

9. As I have written, there were some useful perspectives from business leaders on the proper AI mindset. That mindset is much different from the one I see in legal. Part of that is by necessity: legal thrives on accuracy and confidence. But as one of my clients used to say, we always need to be careful we don’t spend too much time in the closet talking to ourselves. That’s the beauty of attending a conference like the AI Summit. But as at most nonlegal conferences I attend, there were few, if any, legal professionals or lawyers at the Summit, and there was little discussion of legal issues. It’s not good for legal to ignore what’s going on in the rest of the world. If nothing else, many of the exhibitors and attendees are likely clients of lawyers and law firms (or could be). It might be good to hear what they are thinking.

10. Unlike at some shows I have been to, I didn’t get the sense of a bro culture. People were energetic and enthusiastic about AI in general and about particular use cases. They are looking to push the envelope. That’s a good thing. That’s how we advance. It’s like another show I attend every year, CES: 75% of what’s talked about may never happen. But some things will, or what’s talked about will inspire new things to happen and be developed. That’s the beauty of attending: fresh perspectives, new ways of thinking.

When Can We Talk?

My takeaways lead to some broader questions that need addressing. Let me hasten to say that if I have sounded like an AI curmudgeon of late, I’m not. I believe in AI and its vast opportunities.

But with those opportunities come challenges. Like how we can ensure we have the infrastructure to support all the things we want AI to do. Like how AI will disrupt the workforce, eliminate jobs, and redefine what work means.

We get too many pithy concepts tossed around like truisms: AI won’t replace humans; it will just replace humans who don’t use it. Or: there will be other jobs to replace those lost to the technology. Maybe these things are true. But just mouthing them doesn’t make them so.

Perhaps shows like the AI Summit are not the place to talk openly about these things. But we need to have that discussion someplace. A first-time attendee asked me at the Summit if there were any conferences devoted to an examination of the hard issues. I thought for a moment and finally said, “None that I can think of.”

Right now, our relationship with AI is like one in which the hard issues are always put off. That never ends well.

It’s great to sing your team’s fight song and cheer. It’s even better when your team has the talent to meet the challenges it faces. Let’s recognize the difference between cheering and meeting the real AI challenges.


Stephen Embry is a lawyer, speaker, blogger, and writer. He publishes TechLaw Crossroads, a blog devoted to the examination of the tension between technology, the law, and the practice of law.