893 Gemma Galdon Clavell: AI Safety Auditing

Melinda Wittstock:

Coming up on Wings of Inspired Business:

 

Gemma Galdon Clavell:

Often the conversation around technology is a conversation around science fiction and not really the technical capabilities of these systems, because oftentimes people’s understanding of how these systems work is more informed by potential futures and films than by what these systems actually do. So, what are the guardrails? What are the things that we need to put in place to make sure that these systems are trustworthy and safe? They take data from the past; they look for patterns in past data and they seek to reproduce those patterns. These systems are really bad at new things. If you want to take data from the past and look for patterns and reproduce those patterns, that is a great model for GPS: the way that most people go from A to B, from New York where I’m based, to DC, is probably quite similar and stable over time. But if you use that same model in banking, let’s look at the past clients that I had who were the best clients and try to reproduce those same clients, then women all of a sudden disappear from the picture, because in the past we have not been the pattern of customer, because we make less money. Unless you incorporate mechanisms to ensure that these systems work well and understand their own limitations, you’re going to end up with really problematic systems.

 

Melinda Wittstock:

When it comes to AI, there’s hype, and there’s reality; there’s fear mongering and there’s immense potential. Gemma Galdon Clavell is a pioneer global force in AI safety and auditing, on a mission as the founder and CEO of Eticas.ai to ensure that machine learning tools truly serve society by measuring and correcting algorithmic vulnerabilities, bias and inefficiencies in predictive and LLM tools. Today we dig deep into the potential and vulnerabilities of AI, how to eliminate biases, and why women’s voices are vital to ensuring AI serves society best.

 

Hi, I’m Melinda Wittstock and welcome to Wings of Inspired Business, where we share the inspiring entrepreneurial journeys, epiphanies, and practical advice from successful female founders … so you have everything you need at your fingertips to build the business and life of your dreams. I’m all about paying it forward as a five-time serial entrepreneur, so this podcast is all about catalyzing an ecosystem where women entrepreneurs mentor, promote, buy from, and invest in each other … because together we’re stronger, and we all soar higher when we fly together.

 

Melinda Wittstock:

Today we meet an inspiring AI entrepreneur busy identifying, measuring and correcting algorithmic vulnerabilities, bias and inefficiencies in predictive and LLM tools. Dr. Gemma Galdon Clavell’s company Eticas, with its ITACA platform, is the first solution to automate impact analysis and monitoring, ensuring that AI systems are high performing and safe, explainable, fair and trustworthy. In 2023, the BBC acknowledged Gemma as one of the “people changing the world” and this year she was honored by Forbes Women as one of the “35 Leading Spanish Women in Technology”, praised as “a pioneer in algorithmic auditing software”. She was also honored by the United Nations in 2023 as a Hispanic Star Awardee, and is an advisor to international and regional institutions such as the United Nations (UN), the Organization for Economic Cooperation and Development (OECD), the European Institute of Innovation and Technology (EIT) and the European Commission.

Melinda Wittstock:

Gemma will be here in a moment, and first:

 

[PROMO CREDIT]

 

What if you had an app that magically surfaced your ideal podcast listens around what interests and inspires you – without having to lift a finger?  Podopolo is your perfect podcast matchmaker – AI powered recommendations and clip sharing make Podopolo different from all the other podcast apps out there. Podopolo is free in both app stores – and if you have a podcast, take advantage of time-saving ways to easily find new listeners and grow revenue. That’s Podopolo.

 

Melinda Wittstock:

Artificial Intelligence promises to change how we live and work, though it’s far from clear how much and how fast, what’s real and what’s ‘Sci-Fi’, or how it will be implemented to combat the bias inherent in historic training data so everyone can benefit from ethical innovation.

 

Melinda Wittstock:

Today we talk about the pace and limitations of innovation, as well as the risks and where we are likely to see the biggest problems and benefits alike. For instance, venture investors have traditionally relied on pattern recognition to pick the startups they invest in, so with large language models based on historic data, might that perpetuate biases against female and diverse founders? And with the immense cost involved in training the LLMs upon which AI applications are based, there is a risk that concentration of ownership by tech giants Google, Meta, Microsoft and Amazon may thwart innovation rather than advance it.

 

Melinda Wittstock:

Dr. Gemma Galdon-Clavell is a pioneer in AI safety and auditing, on a mission as the CEO and Founder of Eticas.ai to ensure that unsupervised machine learning tools truly serve society and that AI systems are high performing and safe, explainable, fair and trustworthy. She shares the limitations of current technology, why small and high-quality contextual models have the most upside potential for a range of applications from travel to entertainment, plus the promise of blockchain and decentralized approaches to solve for disinformation, content ownership and other AI risks. We also talk about what it means to be a woman innovating in technology, plus how VC must change to enable diverse founders, ethical use of AI and social impact, and the role entrepreneurs can play in ensuring fair regulation that advances business while protecting consumers.

 

Melinda Wittstock:

Let’s put on our wings with the inspiring Gemma Galdon Clavell and be sure to download the podcast app Podopolo so we can keep the conversation going after the episode.

 

[INTERVIEW]

 

Melinda Wittstock:

Gemma, welcome to Wings.

 

Gemma Galdon Clavell:

Hi.

 

Melinda Wittstock:

I am excited to have you here. You’re all about AI safety, which I think is one of the biggest issues in our world right now. Let’s start at the beginning. What made you interested in AI and in particular, AI safety?

 

Gemma Galdon Clavell:

Oh, wow, that’s a long story. I’ve been working in this space, what initially was called responsible technology, then issues around big data. I’ve been doing this for over 15 years now. I wrote my PhD on the challenges of technology from a policy perspective. And so, I guess what I’ve done is kind of keep up. Like, what I worked on 15 years ago was, like, super niche. No one talked about it. No one understood what these data systems did, what software was.

 

Gemma Galdon Clavell:

And today we have the most powerful computers in our pockets through our mobile phones, and we interact with technology every day. I guess my challenge has been, like, to keep up. Like, once I saw that my field of expertise was broadening, I was like, okay, I want to be at the center of this. And I think I’ve managed to kind of ride the wave. So, really happy to continue to work on this space 15 years later.

 

Melinda Wittstock:

Well, the pace of change is just so huge, and there are so many issues around AI and these large language models, like how they’re being trained, right? And are they accurately representing all voices in society? There’s not a lot of transparency, and there seems to be a lot of centralization. What’s your feeling about where we’re at right now with the concentration, like with OpenAI and Anthropic and, you know, Google and whatnot, and how these models are actually being trained?

 

Gemma Galdon Clavell:

Well, one of the problems that we’ve had for a long time now is that often the conversation around technology is a conversation around science fiction and not really the technical capabilities of these systems. And so, for us, it’s really hard to have good conversations with even technologists and policymakers, because oftentimes their understanding of how these systems work is more informed by potential futures and films than by what these systems actually do, which is incredibly problematic. And so that means that it’s really hard to have a conversation on, so, what are the guardrails? What are the things that we need to put in place to make sure that these systems are trustworthy and safe? Because everyone’s kind of looking towards the future. Like, you know, the robots taking over some years ago, and now it’s this software making decisions for all of us and kind of training itself and reprogramming itself and taking away all our jobs, all these kinds of promises that, again, have more to do with things that we’ve seen on television than what these systems actually do. Like LLMs and any other decision-making system. Like the way that a bank decides whether you get a mortgage, the way that your doctor probably decides whether you get a cancer treatment or not, or a hospital decides how long you wait in the emergency room, or whether you get a job or not when you send your CV and it goes through an AI system before it can be seen by human eyes.

 

Gemma Galdon Clavell:

Like, all these systems work in very similar ways. They take data from the past, huge amounts of data from the past. They look for patterns in past data and they seek to reproduce those patterns.

 

Melinda Wittstock:

Yeah. So, it keeps us stuck in bias.

 

Gemma Galdon Clavell:

Not only in bias, but also in what happened in the past. These systems are really bad, for instance, at new things. At the beginning of Covid-19 you couldn’t use any AI tools because there was no history. We didn’t have historical data to train those systems, and yet a lot of people used them. And so, they came up with things that were very much off the mark, but that’s what they do. And in some use cases, that is super useful, but in some others, it really doesn’t work, and it should never be used. But I think that everyone’s thinking, oh, but they will get better. Like, they’re not there yet, but they will get there.

 

Gemma Galdon Clavell:

Well, actually, that’s not what we’re seeing. We are seeing that these systems can get better at doing this narrow thing that they do, which is incredibly valuable, but it’s not useful in every scenario. Like, you know, if you want to take data from the past and look for patterns and reproduce those patterns, that is a great model for GPS, for instance. The way that most people go from A to B, from New York where I’m based, to DC, is probably quite similar and stable over time. But then if you use that same model in banking, you’re like, okay, let’s look at the past clients that I had who were the best clients, and let’s try to reproduce those same clients. Then women all of a sudden disappear from the picture, because in the past we have not been the pattern of customer, because we make less money. It’s usually been men that had the accounts and that managed the house from a financial perspective. So again, that same technical logic makes a lot of sense in recommenders, in GPS or Netflix and all these platforms, in the entertainment world. But when you move it to high-risk scenarios like hiring or banking, then unless you incorporate mechanisms to ensure that these systems work well and that they understand their own limitations, you’re going to end up with really problematic systems.
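Gemma’s banking example can be sketched in a few lines of toy code. Everything here, the records, the profiles, the decision rule, is invented for illustration; this is not Eticas’s method or any real bank’s model, just a caricature of “reproduce past approval patterns”:

```python
# Toy illustration: a "model" that memorises past approval patterns
# will reproduce past bias. All data below is invented.
from collections import defaultdict

# Hypothetical historical records: (gender, income_band, approved)
history = [
    ("m", "high", True), ("m", "high", True), ("m", "low", True),
    ("m", "low", False), ("f", "high", True), ("f", "low", False),
    ("f", "low", False), ("f", "low", False),
]

def fit(records):
    """'Training' = memorise the past approval rate for each profile."""
    counts = defaultdict(lambda: [0, 0])  # profile -> [approvals, total]
    for gender, income, approved in records:
        counts[(gender, income)][0] += int(approved)
        counts[(gender, income)][1] += 1
    return {k: a / t for k, (a, t) in counts.items()}

def predict(model, gender, income):
    # Approve whenever the historical approval rate for this profile was >= 0.5.
    return model.get((gender, income), 0.0) >= 0.5

model = fit(history)
# Two applicants identical except for gender get different answers:
print(predict(model, "m", "low"))  # True: low-income men were approved half the time
print(predict(model, "f", "low"))  # False: the past pattern excludes her entirely
```

Because the invented history skews male, the pattern-reproducing rule hands down different outcomes to otherwise identical applicants; that is the sense in which past odds get set in algorithmic stone.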

 

Gemma Galdon Clavell:

And with LLMs, it’s exactly the same. So, you take a lot of written data from the past, or images from the past, you look for patterns, and you reproduce those patterns. It’s all about the structure; it’s not about the content. And so, one of the issues with LLMs is that they cannot differentiate between correlation and causation. And when you look at a sentence, it may look like it makes sense, because the structure is correct, because that’s what they’re experts at. These systems capture structure and reproduce structure.

 

Gemma Galdon Clavell:

But once you read the meaning, you’re like, that doesn’t make any sense, because the system doesn’t understand. It just reproduces grammar patterns based on structure. So, I asked an LLM the other day how old I am. How old is Gemma Galdon Clavell? And the LLM said, Gemma Galdon Clavell is 62. That’s incorrect. I am not 62. The thing is, why did the LLM say this? Well, it may be because I have someone in my family who is 62 who is fairly visible. And so maybe the LLM saw my name associated with another name and that number, and decided that was my age.

 

Gemma Galdon Clavell:

Maybe my name appears on page 62 in a couple of news pieces that talk about AI. And again, that’s the difference between correlation and causation. A human would very quickly see that a page number next to my name is not an indicator of my age. But for an LLM, it’s just a number next to me, and age is a number. And so that’s the information it’s given. Understanding that these things, hallucination, bias, are not bugs; these are features of AI. These are things that will always be there. They will not get better with time or more data; they will actually get worse. So, unless we create mechanisms to understand those dynamics of bias and hallucination and compensate for them, we will not have good AI. We will be using AI systems that lead us to the wrong conclusions, and lead us to giving jobs to the wrong people, mortgages to the wrong people, and recommendations to the wrong people, which is extremely problematic when you think that most businesses are trusting these systems to make better decisions.
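The page-62 anecdote is a correlation trap, and a toy heuristic makes it concrete. The sketch below, with an invented three-line corpus, is nothing like a real LLM’s internals; it just shows how “the number that most often co-occurs with a name” can win, regardless of what the number means:

```python
# Toy sketch of a correlation-driven failure (not how any real LLM works):
# guess an "age" by picking the number most often found near a name.
import re
from collections import Counter

# Invented corpus: "62" is a page number that happens to sit near the name.
corpus = [
    "Gemma Galdon Clavell, quoted on AI auditing, page 62.",
    "See page 62 for the interview with Gemma Galdon Clavell.",
    "Gemma Galdon Clavell founded Eticas.",
]

def guess_age(name, texts):
    """Correlation, not causation: the most frequent nearby number wins."""
    numbers = Counter()
    for text in texts:
        if name in text:
            numbers.update(re.findall(r"\d+", text))
    guess, _ = numbers.most_common(1)[0]
    return int(guess)

print(guess_age("Gemma Galdon Clavell", corpus))  # 62: a page number, not an age
```

A human reader discards the page number instantly; a purely statistical association has no mechanism for doing so, which is exactly the bug-that-is-a-feature Gemma describes.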

 

Melinda Wittstock:

One of the things that I think is interesting in the whole prompt engineering debate: in any system like this, the answer that you get is only going to be as good as the question you give. So, if you have a question that doesn’t provide context, because the AI models themselves, you know, aren’t going to be able to see the context of something, I mean, this is a big issue. Are these models capable of analyzing anything in a contextual way, or is the output only as good as the prompt?

 

Gemma Galdon Clavell:

No, it’s actually worse than the prompt, because, again, your prompt just triggers a dynamic of neural networks and association in the training data. And so, what the system does is link the words in your question with the words in the training data set and then come up with something that is statistically relevant: the keywords that you have used have, in the past, been used together with these other words. That’s what they can do. And I think we’re all amazed by what putting together keywords with text that uses the same keywords can do, and how these sentences and texts read like really good material, like something that a human could have written. But that’s what they do, and that is great, but that is also limited. Like, you know, don’t ask one of those systems to help you self-diagnose, for instance, and don’t try to look for information that is credible, because it will not give it to you, because the training data that they have is not organized data. That’s the thing with AI. These are not expert systems.

 

Gemma Galdon Clavell:

In the past, we had data systems in big data where the paradigm was: I’m going to train this system with my knowledge. I want a system that is used in schools, so I’m going to train it with good information on who the presidents of the US have been and what wars the US has fought. And someone would validate all this information before it went into those systems. But that was really expensive. So, what AI today says is, actually, I don’t need a human with their expertise to train those models.

 

Gemma Galdon Clavell:

I’ll just take anything I can find and use that to train the model, and the model will find what is relevant and what is not relevant. And we’ve seen that these AI models, LLM models, go halfway. They’re very good, better than we expected. We thought that without expert knowledge at the beginning, in the training, they were going to perform worse. They’re performing better than expected, but they’re also performing really badly, because no one vetted the information that they received beforehand. I would say that the key is not really the prompt. The key is in the training data. And because of the way that LLMs are trained, there’s no guarantee that the information that goes in is information that we can trust, or information that has eliminated the pieces of information that could lead to correlation and not causation.

 

Gemma Galdon Clavell:

The lack of attention to all of those issues means that we have really bad quality systems that, hey, you can use. If you want to write, like, an easy letter, like a letter to my lawyer or a formal letter to a potential employer, you can use them for that, because there’s a lot of information online that LLMs can use, and what it’s going to give you is a standard piece of text. But when there is no standard, you cannot trust LLMs. They’re just not there. That’s not what they’re good at.

 

Melinda Wittstock:

Yeah. So, when you hear all these people talk about AGI, right, and the singularity, or you hear, like, Ray Kurzweil talk about how humans are all going to merge with AI by 2050 or whatever, right? What’s your view on all of that kind of talk? Is that all Sci-Fi, or, like, what’s the potential?

 

Gemma Galdon Clavell:

There’s nothing in my work that leads me to think that these systems will lead to anything that resembles human intelligence. People claim that we are on that path, but it’s like thinking that because I can walk and I can run, in the future I will fly. I won’t. You and I know that I will not fly.

 

Melinda Wittstock:

So, all the investors in this space, like, there’s a lot of hype, there are crazy valuations, there’s a lot of kind of crazy talk. I can see where AI can, you know, lead to some efficiencies, can really sort of be an assistant in many ways. But, man, is this just a big con?

 

Gemma Galdon Clavell:

I don’t think it’s a con. I think that AI is great, again, if we understand its potential, but also its limitations. I do think that there’s a bubble around AI, and I do think that a lot of people are going to lose a lot of money, because, again, there’s a logical fallacy. You understand that the fact that I can walk doesn’t mean that I can fly, and you and I can talk about why that is the case. I don’t have wings. My muscles are not trained or made for that. That’s not what I was designed to do. There are a lot of physical and anatomical issues that make it very clear that I will not fly, even though I can walk and run.

 

Gemma Galdon Clavell:

For most people, understanding the limitations that stand between these systems and being like humans is out of reach; most people don’t have the knowledge to inspect those systems in that way. And so, we’re expected to take those promises at face value. Unless you have that knowledge, you cannot see beyond the hype and say, as I said before, there’s nothing in my work that leads me to think that we are getting closer to reproducing human intelligence. But again, look at how these systems work. Take any baby in the world, show that baby a cat one day for 10 seconds, and that baby will be able to identify a cat in any position, any color, any stage of development, for the rest of their life. For an LLM or an AI using images to identify a cat, we need to train it with thousands, hundreds of thousands of images of cats.

 

Gemma Galdon Clavell:

And still the margin of error is really, really high. The way they work is completely different from how humans work. So, to think that these AI systems and models, which require a process of learning that is so different from how humans think, will go from mathematical training based on images and patterns to doing what a one-year-old does with the image of a cat, there’s, again, a jump that no one has been able to explain. Like, what makes you think that those mathematical, pattern-based systems will evolve into something that resembles the brain? There’s nothing that allows me to tell you today that we are moving in that direction.

 

Melinda Wittstock:

This is so interesting to me, because I’ll give you a use case: my company and how we use AI in the podcasting space. We have hundreds of millions of episodes automatically ingested on any given day. We can go back in time or work with current episodes, and do things like generate, obviously, transcripts, but also isolate relevant clips and put those into a recommendation engine that gives someone who’s interested in that topic that clip. Social media posts, you know, newsletters, workbooks, these sorts of things, right? It’s basically a huge kind of repurposing engine, right? It helps podcasters get the word out about their shows, saves them time, and, you know, it’s very accurate. It can start to learn the voices of people, so we can kind of write in the context of that. We can use it to say, okay, this is a good podcast for this advertiser, because, say, we know about the audience and we know about the content of the podcast. So, this is the ideal episode; this is the ideal placement in an episode, right? So, in that realm of data and insights and automation and repurposing of content, that’s something that for us, like, is working really well.

 

Gemma Galdon Clavell:

Super powerful use cases. Like, again, the ability to go through millions of pieces of data and find patterns. Like, we cannot underestimate how important that is. Like in medical imaging, for instance, being able to identify patterns is great. Like in policing dynamics. Like, you can feed a system with lots of crimes, and it gives you dynamics and patterns of crime that allow you to see the global picture. Like, these systems are so, so, so useful.

 

Gemma Galdon Clavell:

They’re just not humans. And if anything, what they’re doing is making us better, but also highlighting the parts of our work and our expertise that make us human. Like you were saying, you know, when we ask it to come up with a new podcast, we see that the value is not really there; but in lots of other things, it puts the human in a more critical position, because the human is the one who’s like, okay, with the help of AI, and with the context and understanding that I have that the AI doesn’t have, we can come up with a much better decision and a much more efficient decision. So, I think that the obsession with the replacement of humans is stopping us from making the most of what AI is best at, which is complementing us. And I think it’s in this interaction between humans and AI systems that we will see a lot of value. I don’t want my doctor to be replaced by an AI system, but I want my doctor to have the tools of AI, to understand that maybe my very rare disease is not that rare, and that people with my very rare disease have these dynamics or these patterns in their comorbidities or any other symptoms they may have, information that helps my doctor make a better decision. That’s how these tools can empower us.

 

Melinda Wittstock:

The pattern recognition piece is really interesting, and I have a couple of thoughts on this, going back to the very beginning of our conversation. Think about it in terms of trying to figure out who your ideal customers are, or, say, a venture capital fund that’s already investing based on pattern recognition, and their pattern is a guy in a hoodie in a garage eating ramen noodles who’s dropped out of MIT, Harvard or Stanford, right? Literally, the bias in that, and why women get just slightly less than 2% of the venture funding, is that AI takes that bias and puts it on steroids. So, you’ve got lots of VCs now just using AI to kind of qualify who they’ll even talk to.

 

Gemma Galdon Clavell:

Exactly. And I think we all suffer from that, and we all agree that we don’t want that. Like, we don’t want a world in which not just funders, but anyone, just reproduces the dynamics of the past. That goes against anything that we understand about innovation and the ability to see the outliers. I mean, I often say that, you know, at Eticas, we build software to identify bias in AI systems. And what we do precisely is protect outliers. And this is very personal to me, because I’m very much an outlier. You know, fundraising as a woman in a tech space when I’m not a tech person. But I think that’s my superpower.

 

Gemma Galdon Clavell:

Like, you know, my ability to get all this way, to be here today, having defied the odds since the day that I was born. My mom was 14 when she had me. So, you know, it’s been a rocky road, but it’s this ability to beat the odds. If you think about it, what AI does is kind of set those odds in algorithmic stone. And so, it’s harder for outliers to beat those odds. And I don’t want to live in that world. I want a world where the outliers can find ways, and where there are cracks for those outliers. I’m afraid that even in the pre-AI world, it was already hard for outliers.

 

Gemma Galdon Clavell:

Now AI is kind of closing all those cracks that we’ve been able to survive on, the cracks through which we’ve been able to get visibility even though we were not supposed to. I want a world that preserves the ability of outliers to thrive.

 

Melinda Wittstock:

So that’s one of the reasons the concentration of ownership of AI worries me: it sits with this narrow, I’ll just call it a bro billionaire culture, in Silicon Valley, and elsewhere in the world as well, where, you know, three quarters of the funding of AI has come from Microsoft, Google, right, and Amazon, and you can throw in Andreessen Horowitz and whatnot. It’s already a very closed group. And when you say algorithmic stone, I really hear you there, because leading innovation and solving the world’s biggest challenges, from a social impact or sustainability or environmental or climate perspective, or any of these big pernicious problems we have, is going to require, and always has historically, thinkers who are outliers, people who see things differently just by virtue of a different experience or a different perspective. And this is why I think women and minorities and, like, just people who have different experience are really the people.

 

Melinda Wittstock:

So, in a way, you know, how do you set these algorithms in such a way that the outlier or the black swan or whatever is maybe the answer? You’re looking for the black swan, not all the lemmings, not all the followers, right?

 

Gemma Galdon Clavell:

Exactly.

 

Gemma Galdon Clavell:

And that concentration, I think, I guess Silicon Valley, or whatever you want to call the tech bro space, is a very good reflection of the risks of measuring everything based on trends and patterns. You just reproduce the same old, same old. And we have people in positions of power making decisions about the investment of billions of US dollars who have made really terrible predictions in the past about what the future was going to look like.

 

Melinda Wittstock:

Even in the banking industry, you’ve had, for some years now, oh, there’s going to be a recession, there’s going to be this, there’s going to be that. And it’s like, wait, look around. No.

 

Gemma Galdon Clavell:

Yeah. And, like, you know, I’m old enough to remember Second Life and then blockchain and drones, all those things that were supposed to change reality forever. And all those things ended up having some specific use cases in which they’ve thrived (not Second Life, but drones and blockchain). But they’re not the revolution that we were told they were going to be. And I think that AI is the same thing. Are there useful use cases? Yes.

 

Gemma Galdon Clavell:

Do they look like what we have today? No, they do not. I mean, when we audit, what we are seeing with LLMs, for instance, is a lot more value in small language models than in large language models. And so, if you want an LLM to help you, we are working, for instance, with medical providers. Why don’t we create LLMs, or small language models, for hospitals, trained with data from the hospital? The chances for them to hallucinate are smaller, because the information they receive is information that has been vetted, but then they’re also only used by trained staff. So, it’s hospital staff that have no incentive…

 

Melinda Wittstock:

That’s kind of our approach at Podopolo. It’s kind of a QLM, in the sense that it’s a quality large language model, or maybe it’s an SLM, a small language model, built for a specific use case.
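The small, vetted model idea Gemma and Melinda describe can be caricatured in a few lines: answer only from a curated knowledge base and abstain otherwise. The entries, questions, and wording below are invented placeholders, not any real hospital system or Podopolo’s implementation:

```python
# Minimal sketch of the "small, vetted model" idea: answer only from
# curated, pre-validated content and abstain otherwise. All entries invented.
vetted_kb = {
    "max daily dose drug X": "400 mg (per hospital formulary)",
    "triage level for chest pain": "Level 2 (urgent)",
}

def answer(question: str) -> str:
    # Unlike a general-purpose LLM, this system knows its own limits:
    # no vetted source means no answer, not a plausible-sounding guess.
    return vetted_kb.get(question, "No vetted source: escalate to staff.")

print(answer("max daily dose drug X"))   # served from vetted data
print(answer("celebrity gossip"))        # abstains instead of hallucinating
```

The trade-off is narrower coverage for lower liability: the system can say far less, but what it says is traceable to a vetted source, which is the business case Gemma argues for in high-risk settings.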

 

Gemma Galdon Clavell:

Exactly. Like, I’m convinced that the market for LLMs will change significantly in the months and years to come. As for what we have right now in those large corporations: if they cannot pivot to where they actually bring value and where they can minimize those instances of bias and hallucination, they do not have a business case. Who would want to incorporate an LLM that lies to people? No one can afford the liability of something like that. So, I mean, there are a lot of people who don’t want to hear this, but the market is either going to make those companies pivot or just support other companies that understand that these are the challenges they have and that put all of their efforts into that. Like, I am very disappointed at the lack of effort from the main companies in the LLM space to incorporate these concerns and to tackle them in a proactive way. They keep dismissing them and just kind of hoping that they will go away.

 

Gemma Galdon Clavell:

That’s not a very responsible way to lead a business, to be honest.

 

Melinda Wittstock:

Well, it is, but, you know, there’s a track record for this, and it’s called social media, right? I mean, I remember, a couple of companies ago, in this whole debate, where Facebook or Twitter, X now, says, oh, God, we can’t control the bots, we have no idea. Oh, come on: you do, and you can. My previous company Verifeed had access to the whole Twitter firehose. We were doing unstructured data and pattern recognition on it, to help people find their customers or their best influencers, that kind of thing. We could identify what was a bot and what was not.

 

Melinda Wittstock:

And if we could do it, our little, small engine that could, nobody can convince me that Facebook and Twitter can’t. They just choose not to, because their financial model depends on having a lot of users. And then you have all these people advertising on those platforms to bots, the companies are making a lot of money, and there’s no recourse, right? So, I mean, past is prologue in that sense, right? And the same people are doing AI. That’s where I have a lot of concern.

 

Gemma Galdon Clavell:

But I think that things are beginning to change. I don’t think it’s going to be immediate, but again, social media is what opened the current era of investment that we are in, and that’s the same dynamic we are seeing with LLMs and AI in general. But I think that’s going to begin to change, because this emphasis on doing it fully automated, building something that works for everyone, we’re seeing the limits of that. The best technologies, we see over and over again, are expensive. If you want something that really solves problems, it will not be super cheap; your profit margins will not be 90%. If you want to bring value, you need to invest in what you are giving your customers. I think these companies have completely failed to put their customers at the center, and that’s something you can only afford to do when you’re B2C. Those of us working in B2B know that providing value to our customers is paramount.

 

Gemma Galdon Clavell:

That’s something you can get away with when your customers are the whole world, no one can pinpoint them, and they don’t have any power, because they cannot take money away from you; they’re not the ones paying for the product. I think that’s what’s led to a very anomalous economic space. But that is beginning to run its course, and the companies of the future will be companies that invest in providing value to clients; that automate, but only to a level that is responsible and useful, in a way that doesn’t create liability for their clients; and providers that are specialized, that are not trying to sell the same thing to a hospital that they would sell to Netflix. The risks in those two environments are completely different, and you need to understand that a hospital cannot afford to buy something that works in the entertainment space. I think we’re going to begin to see new companies and new entrepreneurs that want to bring value not just to their investors but also to their users, from a conviction that bringing value to users and clients is what ends up bringing value to the investors. When I say this, a lot of people say that I’m a social entrepreneur. Maybe I am. I’m very much driven by impact, but that’s what makes me proud of what I do as well.

 

Gemma Galdon Clavell:

I think there’s a space that has been kind of hidden by the dynamics of Silicon Valley that needs to reclaim its voice. And we need to say there’s other ways of doing business, there’s other ways of generating value, and there’s other ways of using technology that actually achieve the two things. They bring in money, but they also do something to the world that we can be proud of.

 

Melinda Wittstock:

Oh, I couldn’t agree with you more. I want to run an idea by you. We were talking about blockchain a moment ago. Blockchain was the big hype, and it was really hype in the context of crypto, and it was very centralized and whatnot. We found real value for blockchain in the context of AI, in the sense of being able to understand, or the potential for understanding, the provenance of content: whether it’s who owns what, or what’s real and what’s a deepfake, or people being able, for the first time, to, say, own their own data. What do you think about the use of blockchain with AI in a potentially decentralized and more transparent way? Do you see a use case there?

 

Gemma Galdon Clavell:

So first, let’s try to remember what people said about blockchain in the past, because I remember. The idea was not that it was going to be just another technical resource; it was going to change the infrastructure of the Internet. The promise was, everything’s going to be different after blockchain.

 

Gemma Galdon Clavell:

I’ve been inspecting AI systems for a long, long time. I’ve been getting my hands dirty, opening them up and looking at the data and the models and the weights and all of that. And when you do that, what you find is that what blockchain achieves, you can achieve with alternatives that are more tested, so more reliable, less expensive and more efficient. The same decentralization that you achieve with blockchain, you can often reproduce by just linking databases and data sets and making them interoperable. And that allows you to do things that blockchain doesn’t allow you to do. In some use cases, not being able to remove information may be a good thing, but in most use cases, there’s a lot of noise in the data that you want to get rid of.

 

Melinda Wittstock:

That you might want to clean up. Okay, so for ownership, or the provenance of who created what, or trying to figure out what’s misinformation, there could be a use case. But I’m seeing what you’re saying too, because then you have this permanent record of something that’s false.

 

Gemma Galdon Clavell:

Yes. Or a permanent record of a lot of things that are not relevant, and you need to keep it. So the system becomes a lot more expensive, because you need to keep a lot of data that is redundant or not useful. And we have ways of securely linking data and auditing the provenance of that data. So we have mechanisms that give us the flexibility that blockchain doesn’t give us while giving us the same robustness. Crypto is a clear use case for blockchain, but its use cases are limited. In the end, it’s not a new infrastructure for the Internet, because it hasn’t been able to prove that it can do things in ways that are better than other alternatives. Right now that’s the main thing blocking blockchain: we can achieve the same thing that blockchain achieves in ways that are cheaper, more reliable and more efficient.
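The alternative Gemma describes, auditable provenance over ordinary linked data rather than a full blockchain, can be sketched as a simple hash chain stored in any database. The record fields here are hypothetical; the point is that chaining each record’s hash to the previous one makes tampering detectable without an append-only ledger.

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous record's hash,
    so altering any earlier record breaks the chain from that point on."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain: list, record: dict) -> None:
    prev = chain[-1]["hash"] if chain else ""
    chain.append({"record": record, "hash": record_hash(record, prev)})

def verify(chain: list) -> bool:
    """Recompute every hash; returns False if any record was altered."""
    prev = ""
    for entry in chain:
        if entry["hash"] != record_hash(entry["record"], prev):
            return False
        prev = entry["hash"]
    return True

chain = []
append(chain, {"source": "feed-A", "content": "episode 1"})
append(chain, {"source": "feed-B", "content": "episode 2"})
assert verify(chain)

chain[0]["record"]["content"] = "tampered"   # rewriting history is detected
assert not verify(chain)
```

Unlike an immutable ledger, stale or noisy records can be pruned and the remaining chain re-sealed, which is exactly the flexibility she points to.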

 

Gemma Galdon Clavell:

That may change in the future. I do think the idea of blockchain may bring benefits down the line; we just haven’t come up with a use case yet that allows it to.

 

Melinda Wittstock:

It has tremendous potential. Again, I’ll use my company as an example: the ability to give, say, podcasters interoperable wallets. Right now the only way they can authenticate their content is an email address associated with a public RSS feed, which is easily open to manipulation. So, to be able to automatically tie those RSS feeds to their wallet, where it’s theirs, it’s just their content, they own it. You know what I mean?

 

Gemma Galdon Clavell:

Yeah. And for things like money, where you may want to keep track of every single transaction, that makes sense. But when you take it out of money, again, there’s a lot of data and noise that you want to get rid of, and blockchain doesn’t allow you to. So it’s about understanding that. Maybe some technology in the future will change our world as we know it, but we have not yet seen anything that achieves that. We’ve seen a lot of really great innovations that make things easier and better for us, that change our day-to-day. And that’s amazing. But that is it.

 

Gemma Galdon Clavell:

If your expectation is that human life is going to change because of these systems, that human intelligence will change, that we will be able to reproduce humans in non-human bodies, I don’t see any of that happening, and I don’t see blockchain or AI taking us closer to anything resembling that. I would say, let’s make the most of what we have. That’s not to say these systems are not good enough; these are amazing systems. Let’s make the most of them. But let’s cut the hype.

 

[PROMO CREDIT]

 

Zero Limits Business Growth Secrets. Join me together with Steve Little, serial entrepreneur, investor and mergers and acquisitions maestro, as we explore the little-known 24 value drivers that spell the difference between a $5M business and a $50M, even $500M, business. That’s Zero Limits Business Growth Secrets, produced by Podopolo Brand Studio at zerolimitsradio.com, and available wherever you get your podcasts.

 

Melinda Wittstock:

And we’re back with Gemma Galdon Clavell, CEO and Founder of Eticas.ai.

 

[INTERVIEW CONTINUES]

 

Gemma Galdon Clavell:

Let’s sell things that actually work and bring value to our clients. And let’s stop selling vaporware, and let’s stop selling to VCs whose business model is to buy vaporware. I’m much more interested in investors that want to buy into my ability to solve problems. And I think there’s a lot of us out there.

 

Melinda Wittstock:

So that’s a great segue into the fact that Eticas AI is venture backed. You raised money and so you found that aligned investor. First of all, congratulations on that. How hard was that?

 

Gemma Galdon Clavell:

It was actually very easy, and it’s now very hard. It was very easy initially because I didn’t know anything about this world; I know that now. My past is in consultancy and the nonprofit world. I was just very lucky to bump into someone who said, I can help you. And I was like, okay, let’s do this. I’m fearless, and I’m very eager to learn. So I jumped into this, and I very easily found angels. And the great thing about angels is that they are allowed to see outliers; they are allowed to believe in something. We raised our initial round very quickly, and we have some VCs in that initial round. But now that we’re trying to focus on VCs, we see that the dynamic of what they usually fund is very damaging to us, because, again, I’m not the face of their usual pattern, the person that fits, ironically.

 

Melinda Wittstock:

Ironically, Gemma, they’re creating your future clients.

 

Gemma Galdon Clavell:

Oh, totally, totally. Yes. But it’s just very interesting to see how VCs are not allowed to dream or to see potential. They don’t look for potential. They do math: these are the indicators, we fund people who do this, this and this. So, we applied to Y Combinator, and people were like, oh, they’ll definitely take you, because they’re always complaining they don’t have women, and what you’re doing is very much up their alley.

 

Gemma Galdon Clavell:

Well, we pitched, and they were like, oh, you don’t have a co-founder. And I’m like, what? You know, that’s.

 

Melinda Wittstock:

That’s against their rules. Yeah. Like, I’m a single founder, too, as a woman. And that disqualifies me because, I don’t know, it doesn’t fit the pattern.

 

Gemma Galdon Clavell:

Yes, exactly.

 

Gemma Galdon Clavell:

And there’s also a gender aspect. I’ve spoken to so many women who built things with others and ended up being stabbed in the back. I’ve tried to build things with others, and we women are usually mobilized when things are difficult, and when things get easy, people get rid of us. So I’m very aware of that. And I’ve built a great team with people that are really committed to what we do. And I invest a lot in my ability to lead them and mentor them and build something that is meaningful, so that they wake up every morning motivated to work on our solutions, because we are making the world a better place and we are making tech a better thing for the world.

 

Gemma Galdon Clavell:

So, I need to invest in all those things because I don’t have co-founders, but I’ve been able to create a high-performance, highly innovative team by mobilizing this. But no one asks me about that. People are like, oh, you don’t have a co-founder, so you don’t fit the profile, you don’t fit the pattern. Which is just crazy, because that’s the story of my life. I’ve never fit the pattern, and I’ve always succeeded. So I’m going to keep on banging on doors until someone who is able to see true innovation is willing to give me a chance.

 

Melinda Wittstock:

Yeah, I mean, you and I are sisters. Same thing, right? I think that’s true of most women who really innovate in tech, because there aren’t necessarily a lot of role models; our paths are totally different. I arrived at this in a completely different way than most. I was always very entrepreneurial as a kid and as a teenager; I always had all these side hustles and things. But I became a journalist on my college newspaper at McGill, and because I broke so many good stories, following the money and bringing big investigative scoops, one made the Wall Street Journal when I was 19 years old, right? And so I became a business correspondent at The Times of London at age 22, then became the media correspondent just as there was this new thing called the Internet.

 

Melinda Wittstock:

And I was an unusual type of journalist because of my inherent entrepreneurial nature, and I became fascinated with how we know what’s true or not, the gradations of that, the context. Because there are stenographer sorts of journalists, and then there are people looking for truth, and I was very much in the latter category. So, my first business is a media company. I discover technology, get really into that. Fast forward, it’s 2010 now, and I create a crowdsourcing platform for news content that’s not only using natural language processing and unstructured data and unsupervised machine learning, but is also layering in probabilities of what’s true or not about crowdsourced content.

 

Melinda Wittstock:

That’s really hard to do, and I was doing it. And it was impossible to raise money for that company. We had like 500,000 users, and it was working. It was always going to get better, but it was pretty good. And I think the journalism really informed my ability to even see that opportunity, solving fake news and disinformation before anyone was aware it was an issue, because I could see the issues of the filter bubbles on social. I just had a different perspective. So that’s a total outlier. When you go talk to an investor: oh, you’re a journalist, you’re not a tech founder.

 

Melinda Wittstock:

It’s like, what? I’ve actually created these things. Excuse me?

 

Melinda Wittstock:

So I totally get what you’re saying about being an outlier. The people who find these chocolate-peanut-butter connections are not the people who’ve come through the factory, you know, the entrepreneur factory.

 

Gemma Galdon Clavell:

Yeah. And it’s such a pity that we live in a world right now where that is difficult to see because of these rules of VCs. So I think we need to push for a different entrepreneurial context where outliers are allowed to thrive. Because outliers are everything. Every genius that you admire, that you follow, that you’ve read about, they were all outliers. So, if we create a world of patterns, what are we doing?

 

Melinda Wittstock:

Imagine a venture fund founded by outliers, called Outliers, where those of us who have exits and do well plow that back. We should be the change we want to see. I’m a big, passionate believer in that, because I think the venture fund or funds that really recognize that, and also build around social impact, will crush the current funds on results.

 

Gemma Galdon Clavell:

Totally, totally.

 

Melinda Wittstock:

I’m so passionate about that. And like just finding the right people to make that happen. Like one of the, one of the.

 

Gemma Galdon Clavell:

Things that we find: we work in AI auditing. A lot of people say that we do compliance; I always say that we do performance, but that doesn’t matter. What’s crucial here is that there are a lot of companies offering AI governance software, and they all do the same thing: they try to get you to the bare minimum to comply. They’re basically making compliance meaningless. I want to make sure that your systems work better. When I audit your systems, I’m not just making sure that you tick a box.

 

Gemma Galdon Clavell:

I want to make sure that you are treating outliers the way you need to treat them for your system to be efficient. I want to make sure that you identify hallucination so that your business results are better. That’s what drives me, while everyone out there, all the competition, is going against that: I’m going to give you a certificate that you’ve done compliance, but actually, in the back end, we haven’t changed anything; we just came up with this facade of compliance. But the people that we want to have innovating are not the ones trying to cut corners; they’re the ones that look at problems head on and say, I’m going to solve this. This is hard, but I want to crack it.

 

Gemma Galdon Clavell:

And unfortunately, the context that we have right now is empowering all the worst players, the ones that just don’t care about the problems they’re solving. There’s room for them, but they cannot take the whole space. There needs to be space for true innovators, and right now we need to claim that space, because it’s pretty much nonexistent.

 

Melinda Wittstock:

So just before we wrap up, tell me about some of the clients you work with. Are you working with the large language models themselves? Who are some of the clients using your platform?

 

Gemma Galdon Clavell:

Most of our clients, historically, even when we did this as a consultancy, have been in the medical field, because there’s a big awareness of the bias and the issues and the consequences of making a bad decision. But we’re seeing a lot of interest, and some of our clients now come from the hiring world: not necessarily the companies that develop the AI hiring tools, but the ones that use them. If you are using AI to make decisions about your next employees, and you are choosing the worst employees because you don’t understand how the system works, and you’re leaving out really good people, that’s a problem for you. You may have liability because of bias, but you’re also basically bringing in people who are not the best. And we entrepreneurs hire people all the time; there’s so much at risk in hiring. You really want to get that decision right.

 

Gemma Galdon Clavell:

And so we get a lot of clients now from large corporations that use AI to hire people, and these sorts of platforms; also from the banking system, making decisions about who to give credit to. And, pretty much across the board, in client segmentation. We are seeing a lot of issues in the way that any company does client segmentation, and a lot of problems, not just bias but also efficiency, in the ability to understand who your clients are. A lot of AI models are too simple to capture the complexity of your client base, and so you are sending the wrong products and the wrong recommendations and the wrong pitches to the wrong people. So basically, anyone who is using automation in any way is a potentially good client for us. And we are seeing a lot of interest in evaluating the performance of those systems. Our clients understand that AI is great and that AI is the future, but they want an AI that they can evaluate, they can monitor, and they can audit.
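One concrete check an audit of a hiring model can run, sketched here with made-up numbers, is the disparate impact ratio used under the common "four-fifths rule": compare the model's selection rates across demographic groups, and investigate when the ratio falls below 0.8.

```python
def selection_rate(outcomes):
    """Fraction of candidates the model selected (1 = selected)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Under the four-fifths rule, a ratio below 0.8 flags potential
    adverse impact worth investigating (not proof of bias by itself)."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    lo, hi = min(ra, rb), max(ra, rb)
    return lo / hi if hi else 1.0

# Hypothetical screening outcomes: 1 = advanced to interview.
men = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]     # 80% selected
women = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]   # 40% selected

ratio = disparate_impact(men, women)
print(f"impact ratio: {ratio:.2f}")      # 0.50, well below the 0.8 threshold
```

A real audit runs this kind of check continuously on production decisions, across many group definitions, which is what distinguishes monitoring from a one-off compliance tick-box.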

 

Gemma Galdon Clavell:

Our platform gives them that. It’s a subscription model: they subscribe, and basically they get statistics, data and information on how those systems are performing and whether things need to be changed. With LLMs, we are working very much with the industry at large on getting them to understand that all the trust and safety efforts they are making are flawed, because none of them are looking at actual usage. All the efforts go into prompt engineering, sometimes cleaning the data, but no one’s looking at what happens when people actually use an LLM. If we were developing planes, it’s as if all the effort went into asking whether the wings are properly screwed in, whether you’ve followed all the regulations for screwing in the wing, but no one’s looking at what happens when the plane takes off. Does the plane stay in the air? Right now we have no visibility into that. So what does our auditing platform do? We do user segmentation and text analysis, very similar to what you did in 2010, to try to understand what the dynamics of interaction with those systems are and whether those dynamics are problematic.

 

Gemma Galdon Clavell:

And in those dynamics of interaction, we can identify hallucination and misinformation, but also bias. So, we are currently not working with specific clients; we are working with the industry at large, through their trade associations, to build that impact layer of auditability into their procedures.
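The post-deployment angle Gemma describes, segmenting real interactions and comparing how often problematic responses land on each segment, can be sketched as follows. The log fields and the flagging function are stand-ins for whatever trained classifier an auditor would actually plug in:

```python
from collections import defaultdict

def flag_rates(interactions, is_problematic):
    """Group logged LLM interactions by user segment and compute the
    fraction of responses flagged as problematic in each segment.
    Diverging rates across segments point at where harm concentrates."""
    counts = defaultdict(lambda: [0, 0])   # segment -> [flagged, total]
    for it in interactions:
        counts[it["segment"]][1] += 1
        if is_problematic(it["response"]):
            counts[it["segment"]][0] += 1
    return {seg: flagged / total for seg, (flagged, total) in counts.items()}

# Hypothetical interaction log; a real audit uses production traffic.
log = [
    {"segment": "patients", "response": "Dosage is 10mg [unverified]"},
    {"segment": "patients", "response": "Consult your clinician."},
    {"segment": "clinicians", "response": "See the 2023 guideline update."},
    {"segment": "clinicians", "response": "Refer to the dosing table."},
]
rates = flag_rates(log, lambda text: "[unverified]" in text)
print(rates)   # unverified claims concentrate on the patient segment
```

This is the plane-in-the-air view: it says nothing about how the model was built, only about what users are actually receiving.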

 

Melinda Wittstock:

Oh, that’s amazing. I think a lot of people need what you do. What’s the best way for people to find you and work with you?

 

Gemma Galdon Clavell:

Our website, Eticas.ai, and my email: gemma, my first name, at eticas.ai. I’m lucky to be speaking at a lot of events, so if there’s any tech event near you, I may be in the lineup. I’m also featured quite extensively in the media talking about AI issues, so hopefully people will keep on hearing my name now that they know it. But the best way is the website and getting in touch directly. I’m more than happy to do demos, to explain more about what we do, and also to build alliances. I’m a big believer in collaboration, not competition. If you are developing something that is relevant to what we do, I’d love to hear from folks out there and see how we can help one another.

 

Melinda Wittstock:

Oh, that’s fantastic. Well, look, I loved this conversation; it just lifted my day. Thank you so much for putting on your wings and flying with us today.

 

Gemma Galdon Clavell:

Thank you for doing this podcast and for giving us a voice and a space.

 

[INTERVIEW ENDS]

 

Melinda Wittstock:

Gemma Galdon Clavell is the CEO and founder of Eticas.ai.

 

Melinda Wittstock:

Be sure to download Podopolo, follow Wings of Inspired Business there, create and share your favorite moments with our viral episode clip feature, and join us in the episode comments section so we can all take the conversation further with your questions and comments.

 

Melinda Wittstock:

That’s it for today’s episode. Head on over to WingsPodcast.com and subscribe to the show. When you subscribe, you’ll instantly get my special gift, the WINGS Success Formula. Women … Innovating … Networking … Growing … Scaling … IS the WINGS of Inspired Business Formula for daily success in your business and life. Miss a Wings episode? We’ve got hundreds in the vault, all with actionable advice and epiphanies. Check them out at MelindaWittstock.com or wingspodcast.com. You can also catch me on LinkedIn or Instagram @MelindaAnneWittstock. We also love it when you share your feedback with a 5-star rating and review on Apple, Spotify or wherever else you listen, including Podopolo, where you can interact with me and share your favorite clips.

 

 

Subscribe to Wings!
 
Listen to learn the secrets, strategies, practical tips and epiphanies of women entrepreneurs who’ve “been there, built that” so you too can manifest the confidence, capital and connections to soar to success!
Instantly get Melinda’s Wings Success Formula
Review on iTunes and win the chance for a VIP Day with Melinda
Subscribe to 10X Together!
Listen to learn from top entrepreneur couples how they juggle the business of love … with the love of business.
Instantly get Melinda’s Mindset Mojo Money Manifesto
Review on iTunes and win the chance for a VIP Day with Melinda