In this episode of Paradigm, we sit down with Clark Alexander, a mathematician-turned-AI engineer who has helped solve problems across energy markets, supply chains, finance, and logistics. After teaching pure mathematics and mathematical physics, Clark crossed into industry where he applied everything from random matrix theory to Monte Carlo heuristics to real-world operational systems.

From freight optimization and load matching to renewable energy distribution and risk-aware logistics planning, Clark reveals how mathematical thinking changes once it leaves the classroom and collides with noisy, unpredictable environments. He breaks down why companies often measure the wrong things, how Goodhart’s Law quietly sabotages KPIs, and why real AI advantage often comes from survival rather than perfection.

“AI in logistics is tremendously underutilized, not because the math is hard, but because the data is messy. You can’t build AI on broken data pipelines.”
Clark Alexander, Chief AI Officer of Argentum AI

The conversation dives into:

  • Why fat-tail risks break naive automation
  • Why global optimum thinking fails in messy markets
  • Why Monte Carlo, genetic algorithms & annealing outperform in logistics
  • How to optimize for actionability instead of theoretical best
  • Why accountability is the bottleneck for AI automation
  • How data formats—not math—are the real blockers in logistics AI
  • Why “not dying” is the true success criterion for AI at scale

Clark also shares what happens when academics move into commerce, what companies consistently underestimate, and why human intuition will make a comeback as AI matures.

Packed with stories, heuristics, and operational lessons from the field, this episode is a rare glimpse into how mathematics, engineering, and logistics intersect in the age of AI and what it really takes to build systems that work in production, not just in theory.

About Paradigm

Paradigm is a podcast by Aubergine that explores transformative journeys where technology impacts human lives. Through candid conversations with visionary founders, product leaders, and innovators, we uncover stories of bringing ideas to life.


Podcast Transcript

Episode - 44 minutes

[00:10.4]
Excited to listen to cool stories, know your experience, and learn from your experience. And that is going to be a super exciting episode for our audience as well. So thank you for joining the podcast and thank you for giving your valuable time to us.

[00:28.4]
So, I wanted to know more about you, if you can quickly introduce yourself to our audience. You know, about your journey, and how it went from mathematics to AI and logistics. How was the journey? If you can talk on that. Absolutely.

[00:46.7]
Shivani, thank you very much for having me. This, this is exciting. Before I tell you all the fun facts about my life, which is going to turn out quite boring, I want to tell you something about mathematicians. I'm a mathematician and I used to teach for a long time, and I would always start the semester or the quarter, whatever I was teaching, the same way. I grew up in Tennessee, so I call my, my students Baby Childs.

[01:09.2]
Okay, look, Baby Childs. Mathematicians have more crazy people than all the other sciences. We, we just have more crazy people. So what I want to do is tell you stories about crazy people and then why they were solving a problem that's way more interesting. And so I spent a lot of time actually telling stories about mathematicians and then a little bit of time talking about what problem they're trying to solve.

[01:31.0]
So, we're going to try to delve into the, the slightly crazier side. I should have like puffed my hair a little more so it would look like much more mathematical and crazy. But, yeah, all that said, maybe we start, Where do we want to start with?

[01:46.1]
I guess we don't want to start at 1981. That's maybe too far back. Where would you like to start the story? Because where we start the story changes how the outcome goes. So you just tell me where you want to start it and we'll go from there. Yes. So, you know, I have seen and I have heard, you know, a lot of, about you.

[02:02.5]
And you have gone from, you know, academics and data science to AI and logistics. How has that mix shaped, you know, what you believe is possible when math meets real-world systems? What made you make this transition from pure research kind of work to applying math for operational and business impact?

[02:23.2]
So would love to have a story about that? Yeah, absolutely. I mean, if we're going to be reductive about it, I'll just say money. I mean that, about. Well, about 10 years ago, I had my first son. And money, in academics is fine, but in Chicago, it's not enough to raise a family on.

[02:42.0]
So, essentially a friend of mine was, was working in a data science shop and he said, hey, Clark, I'm working on this thing. It's not exactly traditional data science. I need some mathematical firepower. You want to come work with me? And so he went to my wife and said, you need to convince Clark to go.

[02:57.6]
And she said, you need to go work with your friend Rami. And so I went and worked with my friend Rami. And that's how I got into industrial mathematics. But, interestingly, since I wasn't brought up learning how to compute properly, I started trying to see what mathematical physics things I could start applying.

[03:14.7]
My very first thing was, I was trying to use random matrix theory to determine when a data set had been corrupted. And I was like, what are you doing? I don't know, I don't know. But I came up with this interesting, this interesting method using random matrices and their eigenvalue distributions to determine if a data set had been corrupted.
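Clark doesn't spell out his corruption test on air, but the general idea he gestures at, comparing the eigenvalue spectrum of a data set's sample covariance against what random matrix theory predicts for clean noise, can be sketched in a few lines. Everything below (the data sizes, the duplicated-column corruption, the Marchenko-Pastur bound as the reference) is an illustrative assumption, not his actual method:

```python
import random

def covariance(X):
    """Sample covariance matrix of the columns of X (rows = observations)."""
    n, p = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(p)]
    C = [[0.0] * p for _ in range(p)]
    for row in X:
        d = [row[j] - means[j] for j in range(p)]
        for i in range(p):
            for j in range(p):
                C[i][j] += d[i] * d[j] / n
    return C

def largest_eigenvalue(C, iters=500):
    """Power iteration for a symmetric positive semi-definite matrix."""
    p = len(C)
    v = [1.0] * p
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(p)) for i in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    Cv = [sum(C[i][j] * v[j] for j in range(p)) for i in range(p)]
    return sum(v[i] * Cv[i] for i in range(p))  # Rayleigh quotient

random.seed(0)
n, p = 500, 5
clean = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]

# "Corrupt" the data: column 1 becomes a copy of column 0,
# e.g. a join bug that duplicated a field.
corrupt = [row[:] for row in clean]
for row in corrupt:
    row[1] = row[0]

# For i.i.d. noise, sample-covariance eigenvalues stay near the
# Marchenko-Pastur edge (1 + sqrt(p/n))^2; a duplicated column pushes
# one eigenvalue to roughly 2, well above it.
mp_edge = (1 + (p / n) ** 0.5) ** 2
lam_clean = largest_eigenvalue(covariance(clean))
lam_corrupt = largest_eigenvalue(covariance(corrupt))
print(lam_clean, mp_edge, lam_corrupt)
```

A spectrum whose top eigenvalue sits far outside the noise band is what a "misshapen" eigenvalue distribution looks like in this sketch.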

[03:32.0]
Which is pretty interesting. I only used it a couple of times. And then people started cleaning up their data better. They didn't want me asking questions about why their eigenvalues were misshapen. So, that was an interesting transition. And as we kept going through problem after problem after problem, what kept happening to me was saying, oh, I've seen this problem before.

[03:54.8]
It was, oh, it was in whatever, representation theory, algebraic geometry. Oh, this is a PDEs problem. Oh, this is just a probability problem. Oh, it's just eigenvalues. I kept seeing problems that I had solved before. But my aforementioned friend Rami had said, look, Clark, when you're getting into this business, you have to know engineers and physicists and economists and mathematicians and computer scientists kind of all solve the same problems, but their language is not the same.

[04:21.1]
You have to learn how to speak a different language. And so several years was just basically me learning how to speak that language and translate my mathematics into those languages. And that's, that's how I did it. And so the whole through line is just like, oh, I've seen this math problem before. Let me tell you how you solved it.

[04:39.4]
And that's, that's how I've made my way from industry to industry to industry. It's just kind of solving the same problems over and over and telling this, telling a new story to the, the people based on their particular needs. That surely is an impressive journey.

[04:56.4]
I must, it must have been, you know, difficult for you as well. But, you know, looking back to your years in physics and mathematics, what mindset from that world still influences how you approach problems in AI and logistics, across energy, finance, and supply chain?

[05:19.2]
So how do you find common ground between these, you know, very different systems? Okay. I mean, there's two answers. The first one is eigenvalues, but that's not interesting and no one wants to hear that, so we'll skip that. I didn't say eigenvalues, just. Okay. What I used to tell my students and, and now I tell like my, my junior trainees and everyone else.

[05:39.0]
I would say there's really just two differences between you sitting there and me standing up here talking like an idiot. And they would always say money. And when I was teaching, I was like, that's not the difference. I have less at the moment. There's, there's two differences. The first is I have more experience, because I'm older.

[05:56.2]
And the second one, which is really the important one, is I'm not afraid to fail. I don't mind getting a problem wrong 8, 10, 12 times. And I end up telling them this story. You know, 15 years ago, I had a squirrel in my attic. I live in Chicago. And we had an old chimney that was out of use, and some squirrels had, like, chewed through the wire and, like, gotten in the attic, and it was noisy, and they're like, what is going on?

[06:18.6]
So I went up to the attic and I tried to get a squirrel out and I got the mommy squirrel out and I found out the squirrel had, had baby squirrels in my attic. So I had to go up into the attic. The baby squirrels, if you've never tried to catch one, they're difficult to catch. I think I ended up going into my attic more than 30 times.

[06:37.0]
I think the final count was 37 times. I had to get up into the attic to go catch the last of the baby squirrels. And even though that wasn't mathematics, that was like a really very important, learning experience for me. That, failure is an option. Yeah.

[06:54.1]
When people say, like, you know, one thing that I've, I've learned to despise is people saying never give up, I say, that's terrible advice. That is the, the worst possible advice you could give anyone. The better advice is know when is a good time to quit. It's way more difficult to know when you're actually beaten and, and give it up and start something new. Right.

[07:14.5]
And failure is an option. And I, I kind of learned that the hard way. And the squirrels were helpful teachers, trying, trying to teach me how to fail gracefully. Also, coaching youth soccer is something I've been doing for a long time. That also teaches you how to lose gracefully.

[07:33.9]
Yeah. So it's, it's, there's, you know, that's, that's the message that I think the through line is. I have more experience. But what I really have is more experience failing. That's, that's my big advantage: I've failed at doing more things. And so I've learned from those, those mistakes more than most people, who, A, never quit and, B, don't like to fail.

[07:56.1]
That's awesome. You're right on this point that failure is an option. And that is really important to understand, you know, in life, to actually keep moving forward. I have also, you know, had conversations before with you, and you mentioned that, you know, companies measure the wrong thing. So I want to ask what exactly you mean by that.

[08:24.3]
What, you know, are the common measurement traps that they fall into, you know, that you see in logistics or, you know, in AI today, or in any industry? So what are those traps of measuring the wrong thing? Yeah. So.

[08:40.3]
Okay, that is, that's an extremely important question and a good question. This, this again, life lessons where I, I ostensibly came to talk about mathematics, but I'm just going to give you life lessons and you can say that guy was awful. So, you know, when I first got into doing data science in business, I was working with people in real estate and I was working with people at a credit union.

[09:01.4]
And my, my former roommate, who's a physicist, he, he also became a data scientist and machine learning engineer, and said, you know, just keep in mind Goodhart's law. I said, what's that? He says, Goodhart's law says that when the measure becomes the target, it ceases to be a good measure. Right?

[09:19.8]
Because then you're just gaming the system to, like, maximize that one target. And in business you'll see, you'll see KPIs which are somewhat mutually exclusive, right? You want to maximize your dollar revenue per spend, but you also want to minimize your spend.

[09:37.5]
It's like, well, you want to maximize the dollar spent and minimize the dollar spent, right? They're mutually exclusive. You can't do both of those things at the same time; you have to find some middle ground. So maybe that's not what you should be measuring. Right. And, and I've sort of made a specialty out of finding unique measures that maybe people weren't thinking about.

[09:55.0]
I'll give you a good one. It's not necessarily from logistics, but, I, I'm very interested in weightlifting, as a sport. Not that I'm. Not that I'm competitive in it, but you know, I did work with a, man who styles himself as the strongest man in logistics. Like it's like a 1200 pound squat, some, some like near world record thing.

[10:14.3]
Really interesting guy. But when I started thinking about it, you know, well, what, what are you measuring here? What they're measuring in powerlifting is just raw weight lifted. But they have weight classes, right? And in Olympic weightlifting, you're, you're measuring basically technique and how much you can lift.

[10:34.3]
And so I said, well, that's, that's really interesting, because what you really want to measure is how well you leverage yourself. Right, that's, that's really what they're trying to determine: who is the best at leveraging themselves. And so I started looking for the proper measure.

[10:53.7]
One interesting metric I was looking at, when I was, like, going back and doing some statistics on Olympic weightlifting, was distance times the weight lifted. The weight is your force. Right. So this is Newton meters. And now you want to divide by the lifter's body weight.

[11:10.0]
Right? Because body weight is also Newtons. So it's weight times height divided by the lifter's body weight. Right. So what this does is it favors tall and skinny weightlifters who have good leverage. Right.

[11:25.7]
But what that measure is, is just meters. You're measuring, like, the amount of meters that you can leverage your body. Right. How much, how much leverage you can actually get. It's a meters measure. And when I started doing this, it turned out that all the gold medalists in the Olympics, in men's and women's divisions, had about that same leverage. Right.

[11:40.8]
It's somewhere around 13 meters; that's about what a world-class lift is. Women's, men's, lightweight, medium weight, heavyweight, they all have about that same number, except for this one dude they call the Pocket Hercules, who is very, very, very tiny. And he, he lifted 18.
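As a rough sketch of the metric Clark is describing: force times distance divided by the lifter's body-weight force leaves a quantity in meters, since gravity cancels. The numbers below are made up for illustration, and how Clark aggregates lifts to arrive at figures like 13 meters isn't specified in the conversation:

```python
def leverage_meters(weight_lifted_kg, height_lifted_m, body_weight_kg):
    """Clark's leverage metric: (m_lift * g * h) / (m_body * g).

    The g cancels, so the result is in meters: how many meters of
    "work per unit of body weight" the lifter produced.
    """
    return weight_lifted_kg * height_lifted_m / body_weight_kg

# Hypothetical lifters for illustration (not real athlete data):
heavy = leverage_meters(220, 2.0, 105)  # big lifter, more absolute weight
light = leverage_meters(160, 1.9, 56)   # small lifter, better leverage
print(heavy, light)
```

The smaller lifter scores higher here despite lifting less absolute weight, which is exactly the Pocket Hercules effect Clark describes.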

[11:58.8]
Yeah. So he's the winner by my measure. But, you know, it's, it's this interesting measure. I'm not measuring just weight, because he obviously didn't lift the most amount of weight, but he was the best at leveraging his weight. Right. And this, this comes up mostly in, like, the clean and jerk, the big overhead lift.

[12:15.8]
The deadlift gets nothing, because a lot of powerlifters have a very wide stance, and so they lift the bar this much. Right. And so even in a world record deadlift, you're looking at about 3 feet, actually about 1 meter, versus, like, 12.

[12:33.4]
Right. This is a very interesting comparison, because it goes across sports, and it's just one measure. Great. On, on that, you know, in, in terms of measuring things, there's a bunch of stories I like to collect about misaligned economic incentives.

[12:48.7]
And I'll give you a couple of them. You probably have heard this one. We may have talked about it before: the British Raj trying to get rid of the cobras in Delhi. The British Raj came to Delhi, right. And they saw that there were cobras, the snakes, around Delhi, and they wanted to get rid of them. So what did they do? They put a bounty on cobra tails.

[13:07.4]
And what happened? The cobra infestation got a lot worse, because people started farming cobras for their tails. They were getting paid for the cobra tails, not getting paid for getting rid of cobras. So the, the incentive was the wrong thing. And so when you, when you're measuring the wrong thing, people are just going to game the system to hit that one measure.

[13:29.1]
So, why do you think that, you know, so many companies, or, you know, you can talk about any industry, right, why do they get caught up in that trap? You know, is it because of not understanding the real problem, or, you know, picking the wrong things?

[13:46.0]
So how did you figure out why these, you know, industries or, you know, leaders or companies are falling into that trap? Yeah, they don't always. But I think there are two very clear problems that have occurred to me.

[14:02.1]
The first, and maybe more obvious, one is that this is what we're taught in our textbooks, right? You need to measure all these things, these key performance indicators. There's a lot of them, and you want to maximize all of them.

[14:17.9]
And you've been told that this is a good one and that's a good one, and that one's not so good. And so you, like, you try to look at them and you try to tell a story about how well your company is doing in performance based on cherry picking the ones that look good and sort of forgetting to say the ones that don't look good. Right. That's the, that's the easy one.

[14:33.6]
That's the easy one to point out. The other one is a little bit more subtle. But, at least culturally, in, in the tech world and in the science world, anywhere where there's a lot of money on the line, we as a culture have decided that people who make money do it through skill and people who lose money just got bad luck.

[14:54.4]
But that's not the case. That's not the case. The case is that you make money by good luck and you lose money by bad luck. Right? Luck and randomness is a lot bigger contributor than anyone wants to admit. Right? And so, I think a lot of people get caught up in this trap of like, oh, you're so intelligent. Why aren't you rich?

[15:09.8]
It's because intellect and wealth are not the same thing. They're not the same measure. Right? Some really, really super rich people are brilliant, and some are dumb. Some are very, very unintelligent, because they're not the same measure. Right. But we've, we've made this mistake societally, that we think that wealth is skill and poorness is bad luck. And it's just not.

[15:34.2]
What, what is the truth is hard work and, and preparation sets you up to have a better chance of success. But there's no guarantee, no guarantee in this life. Right? And I think, I think that's part of the problem. When we're, like, cherry picking KPIs, we say, oh, the good ones, these were skill, and the bad ones were just bad luck.

[15:51.5]
That's not the case. That's not the case. The good ones were good luck plus a little preparation, and the bad ones were just bad luck. So we, I think, want to discount the role of chance and randomness in like, all of our actual activities. We know this is not a black and white world that we live in.

[16:07.0]
There are car crashes and there are like flat tires and there are train wheels that fail and there's like, someone, someone was like hiding money in a stock account and just like offloaded it and the stock sank real quick because it was like, we don't control these things. Even if you're the smartest guy, or lady in the stock world.

[16:26.0]
And you've bought some options and someone does some rogue action, well, you're still, you're still in a bad way. And it wasn't skill, it was just bad luck. Well, that's fine. And if, if someone did it the way you did it, it wasn't skill; it was still just some rogue operator that did a thing and you just benefited from it.

[16:43.1]
Right. And so in business, what we see is there's one common trait amongst all the rich people in this world, only one. Right? And it's, and it's that they had an appetite for risk. They took a gamble and that gamble paid off. That is it.

[16:59.3]
There is no other common trait. Gender, race, religion, creed, height, weight, education, nationality. Nothing. No common thread amongst all of these things. The common thread is that they all took a gamble and it paid off.

[17:17.3]
So coming back, you know, coming back to, since you come from an engineering background, I want to know about simple approaches, you know, that have outperformed for you, you know, in these engineering systems.

[17:38.9]
And if you can teach our audience about, you know, a concept of mathematics in AI, you know, that everyday AI leaders can utilize. So what could that be? One lesson or, you know, concept you would like to tell from your personal experience, which has outperformed in the field.

[17:57.6]
Yeah, yeah. So, there's an interesting paper that came out October 7th. So we're recording this on what, the 14th? So exactly one week ago today. It was kind of a damning report on all these claimed quantum advantages. And in this report they said that some of the classical heuristic algorithms perform just as well, but you have to pick them correctly.

[18:19.0]
And when I was reading this, I'm like, oh, they're just talking about what I've done, right? Because the things that have worked well for me are these heuristic, probabilistic, evolutionary programs. There's two types of programs that I really like to write: genetic algorithms and simulated annealing algorithms.

[18:36.9]
I use these all the time. And they've been wildly successful in solving problems, especially problems in noisy environments. Right? So, Monte Carlo methods really, really overperform in ways that they shouldn't.

[18:55.3]
So, Monte Carlo methods, evolutionary programming. In case the listeners don't know Monte Carlo methods: basically, this is how casinos decided how they were going to make money. You just simulate a thing a billion times, a trillion times, doesn't matter. You do a lot of simulations and you just do the statistics on the simulations, and more often than not, that's going to be close to the exact right answer, right?
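A minimal example of the "simulate it a lot and take statistics" idea Clark describes, estimating a dice probability a casino could also compute exactly. The game and trial count here are arbitrary choices for illustration:

```python
import random

random.seed(42)

def monte_carlo_prob(trials=100_000):
    """Estimate P(sum of two dice >= 10) by brute simulation."""
    hits = 0
    for _ in range(trials):
        if random.randint(1, 6) + random.randint(1, 6) >= 10:
            hits += 1
    return hits / trials

estimate = monte_carlo_prob()
exact = 6 / 36  # (4,6),(5,5),(6,4),(5,6),(6,5),(6,6)
print(estimate, exact)
```

With 100,000 trials the estimate lands within a fraction of a percent of the exact answer, which is the "close to the exact right answer" behavior Clark is pointing at.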

[19:21.6]
Now, if you, if you tune them for optimization, you do a whole bunch of optimizations and you just pick the best one. And that might not be your global best, but it's going to be pretty close and it's going to be business actionable intelligence. And that's really, really an important key, right? So I give this example a lot.

[19:37.3]
I talked to Steve Dafferin, who was the CEO of Renaissance Technologies, which was for many years the most successful trading firm in the world. And I said, look, with a simulated annealing algorithm, I can pick for you a stock portfolio of 20 stocks, right?

[19:54.1]
In the United States, we have some 7,000 liquid equities you could choose from. So let's do the math. 7,000 choose 20 is how many? A lot. The answer is a lot. It's basically infinity. Computationally, it's infinity. So in a few seconds, I can pick for you a good portfolio which will on average return you, let's call it, 20%, right? Ish.

[20:17.4]
Or I could run my optimizers for days and days and days, three days, four days, five days, and I could get you 20.000386%. And he's like, well, obviously I take the 20 right off the bat, right? You're paying for that extra little tiny bit in time and volatility and computational power, right?
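A toy version of the trade-off Clark describes might look like the following: 7,000 choose 20 is astronomically large, but a simulated annealing search over 20-stock subsets finds a good (not provably optimal) portfolio in seconds. The expected returns here are synthetic random numbers and the objective ignores risk entirely; this is a sketch of the search technique, not a trading model:

```python
import math
import random

# 7,000 choose 20: "computationally, it's infinity."
print(math.comb(7000, 20))

random.seed(1)

# Toy universe: each "stock" gets a made-up expected annual return.
universe = [random.gauss(0.08, 0.10) for _ in range(7000)]

def score(portfolio):
    """Objective: mean expected return of the chosen stocks."""
    return sum(universe[i] for i in portfolio) / len(portfolio)

def anneal(k=20, steps=20_000, t0=0.05):
    """Simulated annealing over k-stock subsets: swap one holding at a
    time, accepting downhill moves with temperature-controlled probability."""
    current = random.sample(range(7000), k)
    best = current[:]
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9     # linear cooling schedule
        candidate = current[:]
        out = random.randrange(k)              # drop one stock...
        new = random.randrange(7000)           # ...try another
        if new in candidate:
            continue
        candidate[out] = new
        delta = score(candidate) - score(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate
            if score(current) > score(best):
                best = current[:]
    return best

best = anneal()
print(score(best))  # a good portfolio found in seconds, not days
```

Squeezing out the last few decimal places of optimality would mean searching a space of ~10^57 subsets, which is the "days and days for 20.000386%" side of Clark's comparison.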

[20:36.2]
So it turns out to be a negative. What Monte Carlo and evolutionary optimizers do is they give you a good, actionable result quickly. That has been wildly successful for me, because I think there's, there is another problem that, that we face in, in data science and AI and optimization: that we're looking for the global best.

[20:57.3]
But this completely discounts noise in the world. There's so much noise in our industry. We're not accounting for an electrical line going down. I'm not accounting for a strike at a port in South Korea. And we're not accounting for any of these things. We just think this is the best way to do it. Well, what if it's not? What if there's some noise in the world and there's some rogue operative that does a thing that doesn't match with your plans?

[21:16.6]
You didn't find the global optimum. You found something that's not even optimum. It's not even actionable at this point. Right. So what's been wildly successful for me is using these heuristic optimizers to get something quickly which is actionable. That has been, like, my bread and butter across multiple industries.

[21:33.8]
Stock trading, electricity trading, supply chain optimizations, warehousing, vehicle routing, pricing, trucking lanes. I've, done this with, what else have I played in, image processing? This, this thing has worked for me everywhere.

[21:51.8]
Perfect. Nice to hear. So, you know, you were recently Senior Director of Engineering at Next Trucking. And I was reading through the platform, you know, so, you know, I want to understand how was it, how was it going?

[22:10.9]
What was the process? And, you know, I heard a few terms like smart load matching, you know, what technology provides shippers with access to limitless, you know, capacity. So how was it, you know, if you can help us understand.

[22:25.9]
Let us understand the concept of that, we would love to. Right. I mean, absolutely. And just, just to be completely clear, I have, I have since stopped working with Next Trucking and I've moved on to Argentum AI, where I'm the Chief AI Officer.

[22:42.7]
And I also work on the side with Energuice, the company I founded two years ago, that works in, like, renewable energy optimization. But some of the through lines are the same in logistics. This, this, in general, is one of the problems that we solved at Next Trucking. In the United States,

[22:59.3]
We have a bunch of seaports, right, where a lot of shipping is coming from China, it's coming from Mexico, coming from Brazil, right. East coast, West Coast. What generally happens is you go, send a driver to the port, he picks up a container, drives it in. Let's say he's driving to Bakersfield, California, dropping off furniture at a hotel, and he drives back to the port with an empty truck.

[23:22.3]
Right? So the goal, like the holy grail of logistics, is to minimize those empty miles. Right? Because who wins when you minimize the empty miles? You reduce CO2 emissions. The driver gets paid for two jobs or three jobs, if you can do that. The shipper only has to pay for one full route instead of two full routes, like out and back, out and back, right?

[23:43.0]
And sometimes you can even get one picked up to go for export. There's, there's some imbalance; there's more import than export at the ports we're working with. But, you know, every once in a while you can, you can even minimize this thing. So let's say, for example, the import-export split is something like, let's just say, 75/25.

[24:02.2]
Right. 75% are imports, 25% exports. If you could match up all the exports, then you only have 50% where you have to really waste the miles. Right? You've, you've effectively reduced 100% of the reducible miles. We couldn't get there, obviously, because these environments are too noisy.
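Clark's 75/25 arithmetic can be written out explicitly. This assumes every import is an out-and-back trip and every export can ride on some import's return leg, an idealization he notes is unreachable in practice:

```python
def empty_return_share(imports, exports):
    """Share of total shipments that still force an empty return leg,
    assuming every export gets matched onto an import's backhaul.

    Each import generates one loaded leg in and one return leg that is
    empty unless it carries an export out.
    """
    matched = min(imports, exports)
    return (imports - matched) / (imports + exports)

print(empty_return_share(75, 25))  # Clark's 75/25 case
print(empty_return_share(50, 50))  # perfectly balanced: zero forced empties
```

With 75 imports and 25 exports, matching every export still leaves 50 empty returns per 100 shipments, Clark's "only 50%": you've removed all the reducible empty miles, and what remains is forced by the trade imbalance itself.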

[24:17.5]
But the way that, the way that I set this up was to say, well, what you want to maximize is the number of loads that you pick up of whatever is like available to be picked up. And you know, simultaneously sort of minimize the miles driven and maximize the pay for the driver.

[24:35.8]
So it's a multi objective optimization. And then again, my trick was to set up a very specific measure to say like how well we've done on that. Yeah, yeah, right. But basically, basically I just use a genetic algorithm, several genetic algorithms. And I use Julia because Julia will crush it. Python will take forever. I'm telling you this from experience, Python takes forever.

[24:54.6]
Like, I would have to go to lunch and come back and wait another hour. Julia does it in a couple of seconds. So, big, big advocate for Julia over here. So anyway, I use a genetic algorithm, again, this evolutionary programming thing, to sort of minimize.

[25:13.4]
And again, the global optimum: one of my co-workers, a very smart guy, found the global optimum. But it took, like, hours and hours and hours, to the point where the shipments would be dead; the shipments would have needed to have been shipped already. Right.

[25:29.6]
I need to make this decision in the next hour. I don't have 12 hours to make it right. So getting this actionable intelligence quickly really paid off massively. And so I highly advocate for this in the logistics space. There's so much noise involved around global supply chain logistics.

[25:50.3]
You don't know, maybe there's a quality control issue at some fabrication plant. You're shipping wafers, to make new GPUs, but this whole batch is going to be compromised. Well, I mean that didn't fit into your global optimum because you didn't account for, you know, messed up chips going in your supply chain, just losing that whole route.

[26:12.1]
So getting actionable intelligence quick is worth way more than getting a global optimum, in reality. Not, not scientifically; scientifically it's a different pursuit. But in reality, if you're concerned about actual business, get that actionable intelligence quick. Good enough is really where you need to be.
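A toy sketch of the genetic-algorithm approach Clark describes, shrunk to one driver sequencing loads along a single corridor so the deadhead (empty) miles are easy to compute. The load data, fitness function, and GA parameters are all illustrative assumptions, not Next Trucking's actual model (which he says he wrote in Julia):

```python
import random

random.seed(7)

# Toy loads: (pickup_x, dropoff_x) positions along one highway corridor.
LOADS = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(12)]

def empty_miles(order):
    """Deadhead distance: dropoff of one load to pickup of the next,
    starting and ending at the port at x = 0."""
    x = 0.0
    total = 0.0
    for i in order:
        pickup, dropoff = LOADS[i]
        total += abs(pickup - x)   # empty leg to the pickup
        x = dropoff
    return total + abs(x)          # empty leg back to the port

def crossover(a, b):
    """Keep a slice of parent a, fill the rest from parent b in order."""
    i, j = sorted(random.sample(range(len(a)), 2))
    child = a[i:j]
    child += [g for g in b if g not in child]
    return child

def evolve(pop_size=60, generations=200, mut_rate=0.2):
    n = len(LOADS)
    # Seed the population with the naive order plus random permutations.
    pop = [list(range(n))] + [random.sample(range(n), n)
                              for _ in range(pop_size - 1)]
    for _ in range(generations):
        pop.sort(key=empty_miles)
        survivors = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            child = crossover(*random.sample(survivors, 2))
            if random.random() < mut_rate:         # swap mutation
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=empty_miles)

baseline = empty_miles(list(range(len(LOADS))))
best = evolve()
print(baseline, empty_miles(best))
```

Because survivors carry over unchanged each generation, the best route never gets worse, which is the "quick, actionable, not provably optimal" behavior the transcript describes; a real system would add the multi-objective terms (driver pay, loads served) Clark mentions.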

[26:32.1]
So where do you feel like these projects get stuck? Is it, you know, data integration or, you know, visualization or automating insights? So where do you feel like these projects, or companies in logistics and AI, struggle with, you know, connecting the dots?

[26:52.9]
This, this is my take on it and I could be way off base, but what I've seen, at least in my experience is that just in general, tech companies tend to pick, a tech stack and they want to pick their technologies before they've solved the problem. And so when you pick a tech stack, you're sort of pigeonholed into trying to either solve someone else's problem, which wasn't your problem, and then you're going to get some weirdo wonky solution which you're trying to just, you know, fit into your solution, or you end up having like, way too much excess or idle compute or you say like, oh, well I'm definitely using this pipeline.

[27:28.5]
What if the data doesn't come in that format at some time? In logistics, if you've seen this before, any customer that sends you data sends their own data format. Some are in Excel files, some are XLS files, some are XLSX files, some are CSV, some are Parquet, some are Parquet 2, some are JSON files.

[27:49.9]
There's misspellings. They write origin city with a capital O or with a lowercase o. Some are underscored, some are camel case. There's no, like, standard data format. And you have this pipeline that says it has to come in this way. Well, good luck to you. I say good luck to you, trying to format all of those different data types into one uniform, concrete data type to fit your tech stack.

[28:10.1]
I think this is just madness. And one of the things that is frustrating but necessary is that there is a lot of manual labor in doing data science around logistics. That's necessary and not avoidable.

[28:26.8]
Unless somehow you have decided to require your customers to pre-format their data in your way. And the only players that are big enough to do this are, like, the Fortune 100s, right?

[28:42.5]
A little logistics brokerage is not able to force all its customers to put their data in the same format. You're just going to lose that customer, right? So one of the mistakes is picking your tech stack before you solve the problem. The other is underestimating how much manual labor you actually need to work through this data.

[29:04.8]
So, let's say you had a system that could automatically surface operational challenges in logistics, or optimization opportunities, every morning. Would a real-time, customizable analytics assistant be useful in managing distributed or logistics-heavy operations?

[29:29.9]
So what would you want to highlight in that system?

[29:36.4]
Maybe let's specify it here. Do you want to talk about over-the-road shipping or ocean shipping? Let's take ocean shipping, for example. Ocean and intermodal shipping, right: the boat comes into port, you pick it up with a truck, you drive it in.

[29:55.9]
If you have good knowledge of an expected value, if your statistics are fairly certain, right, you know that you have scheduled all these ships coming from China or from India or from Vietnam, and you know how many units they have.

[30:13.4]
You can plan on some high percentage of them getting there in about the time you want, right? So the automation in this case is good. You can say, okay, the expected ones came in, so these are the ones available for matching. Right.

[30:28.6]
So I expect this shipment on this day. If it doesn't arrive, then I would do some sort of price adjustment for risk mitigation. You have some sort of log-normal distribution and you have to calculate the tail of it. Because another thing in logistics that I think people forget is one of my favorite things.

[30:45.7]
You can only be a few minutes early, but you could be really, really, really late. You can't be infinitely early, but you can be infinitely late. Right? So the distribution of delivery times is skewed asymmetrically. Not in your favor.

[31:02.7]
It's skewed asymmetrically against you. And I think a lot of people forget this. They model it with normal curves, or with thin log-normal curves. Really, they're fat log-normal curves. Really, really fat log-normal curves.

[31:17.8]
So take time into account, and don't get caught out by some asymmetric fat tail that's really, really going to knock you down. Right? It's not skewed in your favor; it's skewed against you if you're the shipper.
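The asymmetry Clark describes, bounded earliness but an unbounded late tail, shows up immediately in a simulation. A quick sketch with made-up log-normal parameters; nothing here is calibrated to real shipping data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Delay in days relative to schedule: at most a couple of days early
# (a fixed offset), but a log-normal tail on the late side.
delays = rng.lognormal(mean=1.0, sigma=0.8, size=100_000) - 2.0

earliest = delays.min()              # bounded: can't be more than ~2 days early
p_very_late = (delays > 14).mean()   # tail mass: two weeks late or worse

print(f"earliest arrival: {earliest:.1f} days")
print(f"P(>14 days late): {p_very_late:.3%}")
print(f"mean delay {delays.mean():.2f} vs median {np.median(delays):.2f}")
# The mean sits well above the median; the skew works against the shipper.
```

A planner that models this as a symmetric normal curve will systematically underprice exactly the tail events Clark is warning about.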

[31:34.0]
Agree, agree. So you must have definitely tried solving these problems, and you must have come up with a few solutions, given the AI we have today. Right? So what was your approach to solving these real-time problems? And let's say you're building something with AI: if the AI goes wrong, who's accountable for that?

[32:01.8]
Or how do you fill that accountability gap? I mean, this is a billion-dollar question. I have several friends in Europe, and they basically only automate AI up to a point, because this question is open.

[32:20.6]
Who takes the accountability if something goes wrong? Obviously the easy answer is that the company that deployed the AI has to take the accountability. But again, the asymmetric payoff is so negatively skewed against you that no company really wants to take this risk. Right? Someone dies.

[32:37.5]
Someone dies because your AI made a wrong decision, because you programmed the rules to focus on some KPI other than safety. You're dead. Your company is dead, your whole company is dead. So honestly, I would actually advocate against automating as much as you can.

[32:54.2]
Only things that are really, really set in stone should you think about automating. Right. One of the things I like is old school, but cron jobs are really good, right? You say: what came in this morning? What can we plan out for the next week?

[33:10.3]
And you can just run that every morning at 1am or 6am, just a script that goes through and asks: what do I have available to me? Don't plan too far in advance, because again, you have this lateness issue. Something ships late, gets lost at sea, and actually ends up showing up six months late.
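The morning job Clark recommends can be as small as this. A minimal sketch; the shipment records, IDs, and the three-day horizon are invented for illustration, and the script would be scheduled with a crontab line like `0 6 * * *`:

```python
from datetime import date, timedelta

# Hypothetical feed of inbound shipments: (id, expected arrival date).
shipments = [
    ("CN-1041", date(2024, 5, 2)),
    ("IN-2207", date(2024, 5, 4)),
    ("VN-3310", date(2024, 5, 20)),  # too far out; ignore it this morning
]

def plan_for(today: date, horizon_days: int = 3) -> list[str]:
    """Keep the planning window short; lateness risk grows with the horizon."""
    cutoff = today + timedelta(days=horizon_days)
    return [sid for sid, eta in shipments if today <= eta <= cutoff]

# Scheduled from cron each morning, e.g.:  0 6 * * *  python morning_plan.py
print(plan_for(date(2024, 5, 1)))  # -> ['CN-1041', 'IN-2207']
```

The deliberate design choice is the short horizon: anything beyond a few days is left unplanned rather than committed, which is exactly the "don't plan too far in advance" discipline being described.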

[33:27.2]
You still have to account for it. Say you have 750,000 containers that you have to get rid of; you're really sunk because you weren't accounting for this lateness. The big problem in trying to automate AI, at least the way the industry is trying to do it now, is that we're trying to over-smooth things.

[33:49.1]
We're ignoring all these fat tails, the things that really could go wrong. And I think a lot of trucking companies in the United States went bust because, after Covid, there wasn't the upswing we expected. Little private companies and personal delivery services had popped up and stolen the business, and it just didn't return.

[34:10.0]
But they had built contracts very far into the future, many years into the future. And that business died. And what do you do? You have all this financial accountability that you've automated, and that money doesn't exist anymore. What happened is a bunch of companies folded, including Convoy, which was backed by Jeff Bezos. Right.

[34:29.1]
So, I mean, Convoy was a massive thing, and it put like $2 billion of freight back on the market in one day. So be careful. The message in automating is: be careful. That is the message.

[34:45.7]
Do a cron job. Old school, I know, but guess what? It works. It works. Deal with what you can for a few days out, and don't try to go too far into the future, because time and volatility will hurt you. Automate as little as you can get away with.

[35:05.3]
Completely, completely with you on this point. So what do you think is the biggest myth that we have right now about success in AI? No, I mean, that's quite simple. The biggest myth in AI is that there's a way to do it.

[35:21.3]
This is not an industry where there is a universal prescription for success. All you can guarantee in AI is staying alive another day. Right? I mean, I was in Singapore a couple weeks ago for Hack Seasons, and they gave me a fireside chat and said, oh, talk about how to build a successful AI platform.

[35:42.3]
And the immediate first thing I said was: this is not the title of my talk. The title of my talk is How to Not Fail at AI Projects. Right. Coming from industries where there's a lot of noise (stock trading is the best example, and electricity trading even more so), what you have to do is survive to the next day.

[35:59.3]
To win at building AI is to not fail. And maybe that's not the hopeful, optimistic message people want, but that is the reality. If you can survive to keep playing in AI, you've won. That's it. There's no way for me to just tell you: hey, Shivani, if you did X, Y, and Z, your AI would succeed. Maybe, maybe not.

[36:16.8]
That's just rolling the dice, right? Maybe it works and maybe it doesn't. But what I can tell you is that there are some fatal mistakes to avoid: locking in your tech stack too early, trying to automate too much, and discounting fat tails, the stuff that could go against you really heavily at the drop of a hat.

[36:36.7]
If you discount these things, you're probably going to fail. If you don't, you're much more likely to not fail. I don't want to say succeed, right? But you're much more likely to not fail. And as your competitors hit these problems and fail, and fail, and fail, and you didn't fail, you're winning now.

[36:52.7]
Now you've actually made a successful business just by not failing; you've avoided dying. Right. And, you know, it's a little tongue in cheek to say it, but how do you live forever? You just don't die. It's not about your nutrition; it's just about not dying. That's how you live a long time.

[37:08.7]
You just don't die, for a long time. That's how you do it. And sorry, I can't tell you the best diet or the best exercise regimen. We don't know. But if you don't die, you've certainly lived a longer time. That's superb. You know, that's a super talk.

[37:25.0]
Now let's move to an exciting segment. That was a lot of challenges and everything. So now let's move to the part of the podcast I absolutely love. Here come the rapid fire questions.

[37:42.6]
So, what do you think: having people in house, or outsourcing services? I prefer in house. I prefer in house. Okay. One logistics or AI metric that everyone tracks but few truly understand?

[38:01.5]
A logistics trap that everyone falls into. I mean, one logistics or AI metric that everyone tracks but few truly understand. Ooh. On-time deliveries.

[38:21.0]
Perfect. Real-time visibility: achievable reality or a comforting illusion? A little bit of both. More illusion. But there are some success stories. Mostly illusion. AI in logistics optimization: overhyped,

[38:39.3]
or still underutilized? Underutilized. Tremendously underutilized. Most difficult part of turning raw data into actionable intelligence? People. It's the format.

[38:57.3]
Someone else touched this thing and it's now messy. The math is easy; the people are hard. Yeah. So when reviewing results, do you prefer dashboards or deep-dive reports? Deep-dive reports for me. So, cost per mile or cost of uncertainty?

[39:17.5]
Which matters more in long-term planning? Oh, in long-term planning: cost of uncertainty. Cost of uncertainty, absolutely. In daily operations, cost per mile is quite good. How unique do you think a connected intelligence layer is for logistics companies today?

[39:41.0]
How important is it? What's the question? How unique do you think a connected intelligence layer is for logistics companies today? How unique is it? Yeah, in terms of how many companies have it? Not a whole lot of companies have it, but a lot of companies may claim to. External partners or in-house teams: which build more scalable solutions?

[40:06.0]
In your experience? In-house teams. Biggest blind spot you see in how large companies measure performance? Discounting fat tails. Discounting big, big asymmetric things against you. They just completely ignore them.

[40:23.5]
Yeah. Automation or human intuition: which will dominate decision making in the next five years?

[40:33.8]
Human intuition. It's going to come back really strong. There's a caveat; we'll talk about it later. One insight from your research or projects that surprises most people? Random matrices are useful outside of nuclear physics.

[40:51.3]
If multiple stakeholders shared operational intelligence openly, what impact would that have?

[41:02.9]
Unfortunately, it would probably cause suspicion among others. It would cause this conspiratorial suspicion among competitors rather than build anything up. Perfect. So if you could improve one thing in logistics analytics tomorrow, what would it be?

[41:30.0]
For me it would be making sure that drivers get paid immediately. Liquidity. Liquidity of payments between shippers and drivers. Yeah. And when it comes to visibility tools, what's more important: accuracy, speed, or usability?

[41:49.0]
Accuracy, but with the correct metric. Perfect. With that said, we are done with the rapid fire questions. Well, let me give a little note about the accuracy point. Yeah, sure. Please go ahead. We talked about this before, right?

[42:04.1]
This is where accuracy is important, but not accuracy alone; it's the correct metric. Right. Say I want to determine which train wheel is going to fail. I think we talked about this before, right? I have a thousand train wheels and only one of them is going to fail. If I just say none of them fail, I'm 99.9% accurate. But if one fails, again, that asymmetric payoff is so negative against you that you don't want to face it.
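The train-wheel arithmetic is easy to verify: a model that always predicts "no failure" is 99.9% accurate and still misses the one wheel that matters. A minimal sketch with made-up labels:

```python
# 1,000 wheels, exactly one will fail (label 1).
y_true = [0] * 999 + [1]

# A lazy model that always predicts "no failure".
y_pred = [0] * 1000

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
false_negatives = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))

print(f"accuracy: {accuracy:.1%}")           # 99.9%; looks great on a KPI
print(f"missed failures: {false_negatives}") # 1; the one that kills you
```

With such imbalanced stakes, the metric to drive down is false negatives (equivalently, to drive up recall on the failure class), not raw accuracy.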

[42:25.6]
So you shouldn't maximize accuracy here. What you should do is minimize false negatives. That's perfect. That's super. So before we go: I'm a co-founder of Energuice, which I guess I failed to say at the very beginning, but we're a renewable energy and post-quantum security IP firm.

[42:42.2]
So what we're looking at is optimization in energy distribution, especially with renewable energies, and we're trying to plug that into all manner of things, including logistics for smart EV charging. We're also looking at protecting information and being able to distribute it wirelessly in a secure fashion.

[43:00.4]
A couple of interesting projects we're working on. And I just teamed up with Argentum AI, which is looking at cross-border, high-security compute power. So if you have excess compute that you want to sell into a network, we're trying to find people to use it. We hope this will be a really big deal for logistics companies that don't need to buy cloud computing or house all their stuff on-prem.

[43:21.6]
So, a couple of really interesting projects I'm working on, both AI and optimization related. I appreciate you guys having me on. This was a lot of fun. Maybe next time you'll make me do a lot more rapid fire questions so I get myself in trouble.

[43:38.8]
Super exciting. And it was really nice having you on the podcast, taking out your valuable time, giving us superb answers and super cool stories, and introducing us to the real-life challenges and the approach that you take.

[43:53.9]
So that's, that's awesome.

Host

Shivani Rajput

Business Analyst
A business analyst with two years of experience supporting requirement gathering, proposal development, and end-to-end pre-sales and post-sales processes. She is passionate about using analytical insights to solve complex challenges, streamline decision-making, and contribute to organizational growth.

Guests

Clark Alexander

Chief AI Officer, Argentum AI
Mathematician and AI practitioner working across optimization, scientific computing, and real-world systems. Clark’s interests span logistics, renewables, quantum simulations, finance, and computational biology, with a focus on bridging high math and high tech.
