Connected & Ready

Can AI and automation be humanized? with Hessie Jones

Episode Summary

Excitement around technologies like AI and automation tends to focus on the amazing things they can do. But what about the things they should or shouldn’t do? In this episode of Connected & Ready, host Gemma Milne is joined by Hessie Jones, co-founder of MyData Canada, to look beneath the surface of how companies might approach these technologies in terms of mission, values, and privacy to make them more human-centered. Amid the excitement about possibilities, what should companies be concerned about? How can models and sources of data reduce bias? And of course, what will the future of AI and automation look like? Microsoft Power Automate is helping organizations digitize paper processes and automate time-consuming, manual tasks. By bringing together robotic process automation, digital process automation, and AI on a single platform, Power Automate serves the entire spectrum of an organization’s automation needs. Watch a demo today: Thank you for listening to Connected & Ready! Do you have ideas for how we can improve the show? Want to recommend a guest for us to interview? We value your partnership and participation. Please drop us a note at We would love to hear from you.

Episode Notes

Gemma Milne talks with Hessie Jones, co-founder of MyData Canada, about how companies should assess the value and purpose of AI and automation initiatives. They also explore scenarios that give individuals more control over personal information while also making data more accessible and usable, and how businesses can determine the most effective places to use AI and automation.

About Hessie Jones:

Hessie is a Privacy Technologist, Venture Partner, Strategist, Tech Journalist and Author. She is currently a Venture Partner at MATR Ventures and COO of Beacon, a social enterprise start-up focused on privacy solutions. She has 20 years in start-up tech: data targeting, profile and behavioural analytics, AI tech, and more recently data privacy and security. Hessie advocates for AI readiness, education, ethical distribution of AI, and the right to self-determination and control of personal information in this era of transformation. As a seasoned digital strategist, she spent the last 18 years on the web at Yahoo! and Aegis Media, and in various enterprise FIs and start-ups.

Hessie is also a contributor at Forbes and GritDaily, a Cofounding member of MyData Canada, Women in AI Ethics Collective member, Board member with Technology for Good Canada, a Cofounding Member of Education Reform Collective (to combat Anti-Black and Anti-Indigenous Racism in Canadian education) plus a technology mentor and start-up advisor.

Learn more:


Topics of discussion


Sponsor link

Learn how Microsoft Power Automate is helping organizations digitize paper processes and automate time-consuming, manual tasks – without writing a single line of code. Watch a demo today:


Contact us



Follow us on social media




Episode Transcription

Gemma [00:00:05] Hello and welcome. You're listening to Connected and Ready, an ongoing conversation about innovation, resilience, and our capacity to succeed, brought to you by Microsoft. I'm Gemma Milne. I'm a technology journalist and author. And I'm going to be exploring trends around how companies are adapting to a disrupted world and preparing for tomorrow. We're going to speak to the innovators who are bringing products, operations, and people together in new ways.

Gemma [00:00:30] In today's episode I'm chatting to Hessie Jones, who's a co-founding member of MyData Canada and a venture partner with MATR Ventures. We'll be exploring the advantages and opportunities of AI and automation for organizations today and, crucially, how to approach them in a more human-centered way. We'll look at what those building, managing, and employing the underlying structures behind these technologies need to consider to avoid perpetuating the biases that often exist. And we'll talk about what it means to create a system that truly gives organizations and consumers what they actually want.

Before we start, I want to thank all of you listeners out there. If you have a topic or a person you'd love to hear on the show, please send us an email at We're so thankful for you all. Now, on with the episode.

Gemma [00:01:18] Hessie, thank you so much for coming and joining us on the show, and why don't you start by giving us a little bit of an introduction to yourself, what your role is, and what you're working on right now. 

Hessie [00:01:25] Thank you for having me, Gemma. So I used to be a marketer. I call myself an anti-marketer right now because of all the work that I'm doing in privacy. And I think everybody has a post-Covid story. My post-Covid story was I understood what was happening in the contact tracing space and the amount of innovation that was going on to try to get ahead of the virus as of last summer. And I joined a group at to actually start to develop what we call a human-centered platform that would allow people the right to self-determination and control of their information, but at the same time allow the health authorities and other authorities to get the information that they need to make these critical decisions. So it's an adaptive platform that we're building that allows the formation of a data trust, where data is captured within predefined boundaries that are very purpose specific, and it's basically controlled by the user. So I also write a lot on the idea of privacy, but also AI and ethics. And I realized that even from a privacy perspective, there's an intersection of AI and automation and humanity that kind of gives us a glimpse of the issues that already exist, and even the inaccuracies within our organizations that perpetuate a lot of these biased algorithms, which could leave some unintended consequences in their wake and continue to widen the socio-economic disparity that happens today. So I'm in venture capital, and I have to admit I'm a newbie in venture capital. But all my work in data has led me to a lot of the startups I've worked at in privacy and the issues when it comes to lack of diversity. I think they're all coalescing at this point. And so when it comes to representation within companies, critical decisions have to also be represented in the data.
And so this is why I'm thinking everything that I've done to this date is now coming together to say the systems need to change and the practices need to change before we cede ourselves to AI.

Gemma [00:03:44] Let’s actually go back a step then, because I really want to get into the weeds of a lot of what you're saying around human-centered, because this is a really lovely phrase that we do hear a lot about. And I want to hear what your definition is of that, but also thinking about how that can actually be enacted upon by various different people. But before we get into that, let's zoom back a little bit and think about AI and automation as a whole, as a topic. We hear these phrases all the time. They're getting thrown around in all sorts of conversations. From your perspective, walk us through at high level what these technologies are and what it is that they can do. Why are we even talking about them? 

Hessie [00:04:19] So technologies in general will transform the nature of things. They will transform the nature of work, the workplace itself. And we're always talking about progress, how machines will be able to carry out a lot more tasks done by humans. They will complement a lot of the work that humans do and even perform some tasks that go beyond the capabilities of humans. Actually, I watched a show this weekend, Hidden Figures, and it featured this one woman who was an amazing mathematician. She could do everything on the board, but the minute they brought in the IBM machine, suddenly it could compute at much faster rates. So what we're trying to get to at this point is enabling automation in a way that creates a lot more efficiency and more accuracy, to lead to much better decision making. So when I think of automation, I also think of connecting once disparate things. So that means less work and more convenience for humans. So look at this from the perspective of trying to get from point A to point B, and the workflow from a human standpoint is: I have to get to this meeting, but it's not until next week. So the meeting has to be in my calendar so I don't miss it. I have to know what the address is. I also have to know how long it takes to get there. And I also probably need a contact number so that if anything goes wrong, I can contact the person I'm meeting with and let them know that I'm going to be late. So imagine a machine trying to interconnect all these systems so that it'll connect the dots and be able to move things along so you don't have to do all that work. So all the information is already in your calendar. It notifies you when the meeting happens, it actually already fetches an Uber for you so that it picks you up at the right time so you're not late to the meeting. And on your voice command it'll be able to connect you to the contact to confirm that you're on your way.
So I see innovations like this, even at the point where they remove us from being the middleman. There's an innovation called X.AI that does this, where it schedules meetings using an assistant named Amy, or Andrew if you prefer a male. But the idea is, in the future, to be able to connect the Amys and Andrews of all our calendars so our meetings are automatically scheduled for all of us. That's the kind of innovation that I'm talking about, and I think it's going to improve the things that we need to improve in our systems and allow us to actually do work that's more important for us.
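The meeting workflow Hessie walks through is essentially a chain of once-separate systems, a calendar, a maps lookup, a ride service, and a phone contact. A minimal sketch of that chaining, where every type, function, and action name is invented purely for illustration (no real calendar or ride-hailing API is being called):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Meeting:
    title: str
    start: datetime
    address: str
    contact: str  # number to call if running late

def prepare(meeting: Meeting, travel_minutes: int) -> list:
    """Connect the dots a person would otherwise track by hand:
    a reminder, a ride booked for the right pickup time, and a
    fallback contact, all derived from one calendar entry."""
    pickup = meeting.start - timedelta(minutes=travel_minutes)
    return [
        ("remind", meeting.start - timedelta(hours=1)),
        ("book_ride", pickup, meeting.address),  # e.g. hail a car automatically
        ("on_delay_call", meeting.contact),      # voice-command fallback
    ]

plan = prepare(
    Meeting("Partner sync", datetime(2021, 6, 1, 14, 0), "1 Main St", "+1-555-0100"),
    travel_minutes=30,
)
```

The point of the sketch is only the shape of the automation: one source record (the calendar entry) fans out into several coordinated actions, so the human never has to reconcile the systems manually.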

Ad [00:06:59] Microsoft Power Automate is helping organizations digitize paper processes and automate time-consuming, manual tasks. By bringing together robotic process automation, digital process automation, and AI on a single platform, Power Automate serves the entire spectrum of an organization’s automation needs. Watch a demo by following the link in the episode description.

Gemma [00:07:28] So let's go to the other side of the discourse. I mean, I don't want to say there's two sides, but the other side of the discourse in terms of what people are perhaps worried about with these technologies. We've talked about why they are really exciting, why they're gaining momentum, and so on and so forth. But there's now obviously a lot of discussion about various things that you touched on in your introduction, from privacy to bias to ethics and so on and so forth. And actually, you mentioned X.AI. I have a little anecdote about X.AI, which I think perhaps encapsulates a little bit some of the concerns that people have around these technologies. And, you know, I'll start this by saying I used it a long time ago. Hopefully the technology has advanced since then. But I emailed somebody who had Amy as their email assistant, and I'd heard of X.AI, so when I got a response back and it had X.AI at the bottom, I knew it was a bot, so I didn't have to worry about it. But I emailed this person and said, I'm really sorry I can't come to a meeting because I have a family funeral next week, so I'm not going to be able to make it, huge apologies, please let me know when a better time suits in the next couple of weeks. And I got a response from Amy saying “noted, can you do this day?” Now, of course, that is an AI bot in some sense working in a really great way. It's understood the context that I can't make a meeting and asked me for a new date, but it hasn't said, gosh, I'm really sorry for your loss. And I remember in that moment thinking, hmm, funeral's quite a common word. It's not a difficult thing for a computer to understand. You know, funeral means death. So I think it's these sorts of examples that make people very hesitant about these technologies from an emotional perspective.
But of course, it opens up the discourse about all the other things that these technologies could perhaps miss or misunderstand, and therefore create some kind of result or impact on the person that's faced with it. So, yeah, I would love to hear from your perspective, how do you sort of encapsulate it? We've done the exciting side of AI and automation. What's the worry that we all need to be thinking about and contending with?

Hessie [00:09:22] Well, I want to respond to your note about Amy, because right now, if businesses are going to include these types of solutions in their systems, it has to be done from the perspective of value. So what is it doing for my organization? Is it yielding a lower cost, a higher output? Do I have better customer service? Will it make an improvement on my bottom line? But the risk here is that businesses should not automatically give in to automation. There always has to be consideration for what's called a human in the loop, to audit for these types of instances, because Amy is not perfect. Now, mind you, in my instance, when I went to a meeting, the woman who met with me thought Amy was coming, so she ordered an extra coffee. So I said, oh, no, Amy is not real. Amy seemed real to her, I think, because the email conversation gave her everything she expected. So from that perspective, it does work. But I think we have to be careful and always make the assumption that automation is not always error free and systems are subject to mistakes. And that mistake could translate into: what is the cost of that service? What is the potential reputational harm to my business, and what's the opportunity cost of that error? So I think from that perspective, we have to understand the impacts. But on the other side, if we're using data to actually make critical decisions, then that data needs to be stored somewhere, obviously. And so how are we securing this information? How much of this information is personally identifiable? More and more, businesses who collect and process and manage a lot of the data they use for automation need to understand the risks from data leakages, cyber attacks, et cetera. So investments and processes need to address this. But on the other side, I take a look at automation that's happening today, and I realize that computers spit out decisions from their makers.
The business decides what the goal of that process should be. So decisions are typically grandfathered from systems and processes that have worked for many, many years. So automation is merely an extension of that process. What we're starting to realize is that while many of these decisions were profitable for businesses, they haven't been fair, and they marginalize a lot of vulnerable populations. The very processes that have been used to benefit the business have kept a lot of populations at bay. And so I take the example of my banking experience, and loan and credit adjudication has been known for this. They use a combination of credit score, but they also use location information to determine someone's creditworthiness. And as we know, in the United States and around the world, location is a proxy for income. It's also a proxy for racialized communities, because in the US, redlining has typically been used to determine creditworthiness: they realize that specific low-income groups reside within specific zip codes. They also realize that there is a disproportionately high number of, let's say, Black residents living in that neighborhood. And so that will end up swaying a decision on whether or not an individual should be granted credit. And a lot of those are human decisions. Those are not business decisions. But the minute the human decision moves into the process, then that will automatically bias the system in itself. And as we move forward, it becomes even worse.

Gemma [00:13:13] So let's talk a little bit about how businesses can think about how to best use these technologies, because there's benefits there. But as you said, you have to be able to work out: is the cost of error worth it? And you mentioned this idea of things having been profitable but not necessarily fair. And you also raised this idea that businesses have to be thinking about what really has value, and for some, value is profit. Right? Whereas for others, value is fairness. And I wondered if you could talk a little bit about what you mean when you say looking at value. Is that a, I don't know, a soul-searching activity that a company needs to do in terms of thinking about its impact on the world? Or is it something perhaps not so airy-fairy?

Hessie [00:13:53] Value, I don't think, can actually be captured in a vacuum these days, because value to a business means that what is right for the business is not necessarily right for the consumer. And so they have to look at it at a higher level to understand how it's impacting my shareholders, how it's impacting my customers. But in the long term, are there going to be ramifications? And so I look at this from my perspective as a marketer, and I'll talk about my evolution, because this is how I think businesses should start to think about things, especially when they take into consideration the demand for privacy that's out there. So when I was a marketer, we wanted to find out more about our audiences. Way back when, we didn't have a lot: we had demographic information, household income, and location. What evolved through digital was that we had this whole host of data that we now had access to that got us to think more about their intentions and who actually influenced their behavior, et cetera. So this idea of customer centricity was a play: we wanted to figure out their needs, their wants, their propensities, with the ultimate goal of figuring out what they would do. So after 20 years of this data marketing experience, I now consider myself an anti-marketer, because I realize that we're crossing a very, very dangerous line. And I start to question whether or not business actually needs all this data in order to make sound decisions. So the bigger question is: if we took away the data control from, let's say, these big platforms, would we still be able to make sound decisions with less data? And would we be able to, at the same time, minimize the chances of our data being misused? So I look at organizations like health care, but I also look at insurance, because they need the data to be able to ensure that they're providing insurance for the right people.
They need the data to make sure that they're talking about the right person who had that specific diagnosis. And so they need to look beyond profitability and also ask: down the road, are these decisions going to hurt other populations, and will my business suffer from it? The other part of it is: am I leaving money on the table if I don't change the way I do things and allow more access to my products and services? That's the other thing to think about. And I spoke to a friend of mine, a VP of media at Wal-Mart, who has been doing tons of data targeting. He's been in the ad business forever. And what he said to me is that more people will demand privacy, and they're willing to forego some of the data targeting and relevance in exchange for better experiences. And so if that is the tradeoff, I think that will be important. So now we're moving from a customer-centric environment to human-centric. So it means moving away from a practice where we collect anything and everything about everybody and try to predict what they'll do, to human-centric, where we give people the tools and the authority to actually assert their right to self-determination when it comes to personal data. But it has to be fostered within an environment that actually caters to this very principle. And so if businesses are in line with that, can we somehow create innovation that provides benefits for both the organization as well as the individual?

Gemma [00:17:34] What does that look like then from a sort of practical perspective for businesses? Because everything you're saying makes complete sense. I can imagine most people listening to this would be nodding along and going, that sounds pretty reasonable. Yeah, we want to do that. But from the perspective of governance structures, data ownership, data use, and privacy, do systems already exist, or are there perhaps ones out there that businesses are not currently using? I mean, what does it mean to really lead in this area? Is it, I don't know, switching your provider or changing your CTO? Or is it a mindset? What does that actually look like in practice?

Hessie [00:18:06] So there's a lot of work being done in privacy, and they're looking for ways to minimize disruption to the service and also minimize the risks of cyber attacks, because the centralization of data is the reason why a lot of this stuff happens, and also the dependence on data. So there's a lot of things to think about. So I'm going to give you some of the MyData principles and then tell you about some of the technologies that are out there. So the guiding principle for MyData is actually, first and foremost, individual empowerment. The individual is also the point of integration, whether that's their DMV record, their health care record, or where they went to school. There is also the personal control that I talked about over their own data. And the institution at the same time breeds this idea of transparency as well as accountability. Portability is going to be of huge importance, so that wherever they go, whoever they see, they capture the right information within their wallet, within their phone, so that they can prove status. But the most important thing is that interoperability is going to be key, and the standards for interoperability are going to be governed by the bigger institutions that create standards in privacy.

Hessie [00:19:22] Like, what are the key things that countries need to know that allow Jane Doe to move from point A to point B when she has to cross a border in between? One of the exciting innovations that's been there for many years is called verifiable credentials. And I spoke with two women who work for an organization called the Covid Credentials Initiative, and basically the use of verifiable credentials is being looked at to actually reopen the economy safely. So allow citizens to move from one destination to the next without actually disclosing their personal information, which will be in their wallet, but allow what we call the issuers, who are the authorities, to actually verify that her data is indeed correct. So that when she goes to the border, there is a tick box that says, yes, she's received a vaccination, without actually disclosing where she got it or who the certifying authority was. The information is never passed from an issuer to a verifier. It's always kept with the individual. There's a certifying authority that basically says, yes, that's true, or no, that's false.

Gemma [00:20:35] This is one of the things I hear a lot about with the arguments, with, you know, one of the pros I guess of blockchain structures. Are you talking about a blockchain technology that allows that, or something different, like a whole different creation of this body that is the verifier? Like, who is that? And how do we ensure that that body is also, you know, the one that is going to be ethical, responsible, and so on. Right? Or is the answer, you know, let's just make it all distributed and not worry about it at all?

Hessie [00:21:00] Well, this technology doesn't have to sit on a blockchain. The idea of any technology working with blockchain is that it allows the movement of information without relying on one party. That's the point of blockchain. The great thing about this is that the data also stays with the actual issuers. So it stays with your Department of Motor Vehicles and it stays with the health authorities. But it allows the movement within, potentially, a blockchain system to be able to confirm that the authorities actually got the right information from the verifiers. And so, yes, it can happen. So let me give you an example. So Jane Doe has the freedom to get vaccinated locally at a pharmacy or a health clinic. Right? And then she wants to travel to New Zealand, so the border authorities have the information so that Jane does not need to quarantine herself, because the credentials in her wallet actually provide that history. So she'll be able to go to that concert in New Zealand and present a specific credential, so she can actually hop through several identity systems while minimizing the risk that anybody actually knows where she's gone, who vaccinated her, who she's met, etc. All that is already within the credentials. So it allows her to withhold a certain amount of information to preserve her privacy. But in different circumstances, she can selectively present information to other authorities that need it, and it enables this interoperability between systems so that there's less friction and she can move freely within her city, or outside, or even to her office building.
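The issuer, holder, and verifier roles Hessie describes can be sketched as a toy program. To be clear about the assumptions: real verifiable-credential systems (such as those following the W3C Verifiable Credentials model) use public-key signatures and cryptographic selective disclosure, not the shared-secret HMAC below, and all the names here are invented. The sketch only shows the shape of the flow: the issuer signs claims, the holder keeps them in a wallet and discloses just the attribute the verifier needs.

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"health-authority-secret"  # stand-in for the issuer's signing key

def issue_credential(claims: dict) -> dict:
    """Issuer (e.g. a health authority) signs a set of claims for the holder."""
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}

def present(credential: dict, disclose: list) -> dict:
    """Holder shows only the attributes the verifier needs, nothing else."""
    return {name: credential["claims"][name] for name in disclose}

def verify(credential: dict) -> bool:
    """Verifier checks the issuer's signature; the issuer never sees the exchange."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

# Jane's wallet holds the full credential...
jane = issue_credential({"name": "Jane Doe", "vaccinated": True, "clinic": "Main St Pharmacy"})
# ...but the border agent sees only the one fact they need.
presentation = present(jane, ["vaccinated"])
```

One honest limitation of the toy version: here the signature covers the whole claim set, so verification needs the full credential. Production systems use schemes built for this (e.g. BBS+ signatures) precisely so a verifier can check a disclosed subset without seeing the rest.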

Gemma [00:22:46] Yeah, the nice example I always like around this is proving your age in a bar, for instance. You hand over your ID, and instead of giving that person your name, where you're from, your exact birth date, blah, blah, realistically all you need to do is give them a piece of paper that says, I am over the drinking age. And I suppose it's that kind of removal of the other information. But also, it sounds like what you're also saying is removal of the risk of these other places having extra information that they don't need about you, that perhaps could then end up somewhere you don't necessarily trust, instead of staying just with that original place that took it in the first place, which you do trust?

Hessie [00:23:26] That's exactly it. You're in control of your information. Think of the guy at the bar. He really just needs your age. And that's it.

Gemma [00:23:33] Yeah. 

Hessie [00:23:34] So why does he need anything else? And you're absolutely right. Like, we don't need that in the future. 

Gemma [00:23:39] Let's talk a little bit about the data that's used to train AI models, and this idea of grandfathering, which is a really nice way of putting it; I haven't heard that phrase before. This idea that there is an inheritance, literally, of systems, of ideas, of human biases when you're creating new models. So we've obviously spoken about how important it is that the data used to train these models is as good as possible, to try to reduce the bias coming through those models. But what are perhaps some alternative data sources, or maybe different kinds of models or different approaches, that could ensure these models don't have the same biases that the people creating them may have?

Hessie [00:24:19] So I think at the basis of this is do we have to go back and scrutinize the training data to determine the levels of bias? Because in a lot of cases, that's where the problem lies. If you have a training set that's already imbalanced because it highly favors a specific group or a specific income level, et cetera, then we should be able to go back to that and say, well, let's now augment that training set to include people who've been disproportionately disadvantaged because of it. 

Hessie [00:24:56] And so what does that look like now? Yes, the results will provide lower profitability, but over time, I think, by including those sets, and I don't know how many corporations are willing to do this, are they willing to take a hit on their profitability in order to establish this kind of fairness? Because I can tell you, innovation will emerge that will actually provide credit services for those who have been deemed less creditworthy in the past because of the rules that the system has dictated for a lot of years. I ran into a young man here in Canada who has created a technology that allows crowdsourcing for newcomers to the country who are trying to, let's say, save for their kids’ education. And so his technology, it's very basic, but it eventually will be automated, and it allows friends and family to actually put money into almost a crowdsourcing system that allows them to build up wealth for that child. And then that money can be put into a wealth fund that will grow while the child is growing up through the school system as well.

Hessie [00:26:08] So he's looking for alternative ways to actually build wealth in systems where the family just couldn't qualify, because they needed a certain minimum income in order to actually start building wealth, building an education fund for their kids. So there's so much demand for it. The problem is that no one is doing it, except for a few who are looking at these kinds of opportunities and how profitable they can be if people just look at their systems and decide that they want to change them just a little bit.

Gemma [00:26:43] So let's dig into that point just a little bit more. So it sounds like there's people out there creating new organizations, new businesses, new charities that start with this problem and try to solve it. And then, of course, there's also, you know, suppliers of new technologies and so on and so forth, providing the technical means for existing businesses to adapt their systems. What would be your advice for people listening who are at existing businesses, perhaps of all different kinds, who are going: you know, we've got our existing systems, we have our existing business that doesn't have a central core mission of privacy or whatever else, we're selling bean bags or something, I don't know. How do we make sure that we are moving forward as a business in a way that is responsible, that's human-centered, and kind of touching on these points that we have today? What would be, I guess, maybe the first steps for someone at that kind of business? Is the next step to go back and talk to someone? Do they shop around for new technologies? Tell us a little bit about what your advice would be.

Hessie [00:27:42] I wouldn't want to put fear in people's minds, but I will say legislation is coming. Legislation is coming to the point that they will start penalizing organizations that don't provide proper disclosure of their specific use of information. In Canada, we just recently introduced our Consumer Privacy Protection Act in November, and that penalizes all businesses regardless of size: five percent of revenues or two hundred fifty thousand dollars, whichever is less. And that's a huge problem for small businesses and startup companies who are trying to make good on things. The unfortunate part of that is that the big corporations can handle it. And we've seen that in the past. To them, I think, sometimes it's like pennies; they will absorb it and they'll move on. But small businesses can't afford to do that. So I think the first step is compliance. The first step is making sure that you have systems in place that, first of all, protect the information. But you have to take a good look at your own business and understand: what are you using the data for, and is it done in a way that will help your customers down the road, or will it hurt them? Those are the things that companies don't think about, because they only look at the profit side of things. But I would also ask them to look at alternative ways where we could augment current data with new ways of making data, like synthetic data. Have you heard of synthetic data?
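The penalty structure as quoted here, the lesser of five percent of revenues or a $250,000 flat cap, explains why Hessie says it bites smaller firms proportionally harder. A quick sketch using only the figures quoted in the episode (listeners should check the actual legislation, which may differ):

```python
def penalty(annual_revenue: float) -> float:
    """Lesser of 5% of revenue or a $250,000 cap, per the figures quoted above."""
    return min(0.05 * annual_revenue, 250_000)

# A $1M startup pays $50,000, a full 5% of everything it brings in.
# A $1B corporation hits the cap and pays $250,000, just 0.025% of revenue.
```

So under this formula the large corporation's penalty is a rounding error while the startup's is a material share of its revenue, which is the asymmetry being described.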

Gemma [00:29:20] Tell us a little bit more about it. 

Hessie [00:29:22] OK, so I actually ran into this in the last couple of months, because I was interviewing a company out of Israel called D-ID, short for de-identification. The way they're looking at privacy is to fool the computer. So from an image perspective, I can have a picture of Gemma in front of me and I'd swear that's Gemma. But through, let's say, generative adversarial networks, they can actually change some pixelation within that photo so that it still looks like you, but to a computer it does not look like you at all. 

Gemma [00:29:59] Oh, this is like glitching and things like that. You hear this sort of glitch activism in terms of trying to trick systems and whatnot with computers. 

Hessie [00:30:06] Exactly. When you look at facial recognition, synthetic data and related techniques can be used to minimize the recognition of somebody's face from an accuracy perspective. You can also use synthetic data to create or augment your database: starting from, let's say, a thousand pieces of labeled data, you can build up, and it's been done, a massive enough data set to train models on. It reduces the reliance on people's actual data, and it reduces the risk of surveillance on PII. From that perspective, it will be a huge market in the future. And I know there's going to be an argument, because I've had the same argument, about synthetic data being used to bolster deep fakes. That's absolutely true. The difference is that if this becomes commonplace, then what are the mandates, from a government and legislative perspective, to make sure people know that it is a deep fake? To me, that's a small price to pay for reducing the demand on real people's data and the surveillance we already see happening in China. We don't want that to come here. And here's the other thing: businesses can now create data where they never had access to it before, because you know how much labeled data costs. Most of the companies that have the data are huge corporations or huge social networks that have the ability to capture it, and they don't necessarily share it with anybody else. So small businesses that want to improve their systems can use synthetic data methodologies to create this information for themselves. 
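[Editor's note: the de-identification idea Hessie describes, tiny pixel changes that a person never notices but that push an image across a recognizer's decision boundary, can be illustrated with a toy sketch. This is not D-ID's actual technique; real systems use GAN-based perturbations against deep-learning models. Here a hypothetical "verifier" that compares raw pixel distance is defeated by small, bounded per-pixel shifts.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "face verifier": treats two images as the same person when their
# mean squared pixel distance is below a threshold. Real recognizers use
# deep embeddings, but the idea of a decision boundary is the same.
def toy_match(img_a, img_b, threshold=0.01):
    return float(np.mean((img_a - img_b) ** 2)) < threshold

original = rng.random((8, 8))  # stand-in for a greyscale face photo

# Shift every pixel by at most 0.12 on a 0-1 scale -- barely visible to
# a person, but enough to cross the verifier's threshold.
perturbation = 0.12 * np.sign(rng.random((8, 8)) - 0.5)
altered = np.clip(original + perturbation, 0.0, 1.0)

print(toy_match(original, original))  # True  -- matches itself
print(toy_match(original, altered))   # False -- no longer "the same face"
print(float(np.max(np.abs(altered - original))))  # at most 0.12 per pixel
```

[The design point: to a human, `altered` and `original` differ by a barely perceptible brightness shift, yet the distance-based check flips its answer, which is the essence of fooling the machine while preserving the image for people.]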

Gemma [00:31:57] So bearing in mind all the things we've just been speaking about, you know, various issues around the costs of perhaps bringing in these new technologies, the worries that businesses might have doing it wrong or causing harm or so on and so forth, how do businesses determine the best places within their business to utilize these technologies? 

Hessie [00:32:14] I think businesses have to think about, first of all, where these technologies will benefit the most. To me there's an important use in productivity: look at your email systems, look at how you communicate with one another, look at how you collaborate. Can we hook up our email to our sales CRM and to our communication tool, so that information automatically moves from one area to another without us actually doing the work? Those are the kinds of things that are going to be important. But we also have to realize that the more we create these integrations, the more careful we have to be about the types of things we're communicating between and within those systems. Keep in mind that if the information we're talking about could be highly confidential, then maybe we move it towards private channels, or even towards a phone call. A lot of this has to be considered when you're developing the communications and productivity tools within your system: how they're going to be used, and to what extent. We should always be mindful of the privacy of the information we're communicating. 

Gemma [00:33:37] So building on this point: you've chosen the place or the area of the business where you want to implement some AI or automation, maybe it's a productivity tool or something else. What should businesses make sure they don't overlook as they begin the process of adopting or implementing these new systems? 

Hessie [00:33:54] Yeah, so at the end of the day, they have to reconcile what the objective of the company is, what goal they're trying to get to, and, in getting to that goal, what the potential fallout could be. To me, it's like what I said before: there are always compromises when you try to do things faster and more accurately. Build in a point where you can always scrutinize your system for potential errors. Don't cede everything to the machine, as they say; scrutinize the outputs, make sure they're doing what they're supposed to be doing, and feed that back into the system so that you're always validating what you want versus what you end up having. I think that's so important, because over time, profitable doesn't necessarily mean fair. We have to always take a look at our customer base and determine whether we're over-indexing on one group or another, or whether we're starting to see behavioral changes and churn in a specific group because we weren't mindful of a decision that impacted that group earlier on. So to me, it's validation of the objectives, but also of the rules and the decisions you make in getting those outcomes. 

Gemma [00:35:27] It sounds almost like you're saying that what you mustn't overlook is the bigger picture: rather than getting lost in the weeds, zoom out and really think about what it is you're even trying to do here, and who's going to be impacted by decisions that might seem small or really specific in the moment. Realistically, I think we can sometimes overlook that: stepping back and just taking a second. Right?

Hessie [00:35:51] I think if you take profitability out of its own silo, then you start to ask: is there sustainability in the decisions I'm making long term, 10 years down the road, even if profitability looks great in the near future? And that's the unfortunate thing about how we're positioned in this day and age: I'm trying to get to a certain performance level by the end of the year so that I can get my bonus. 

Gemma [00:36:17] Right. 

Hessie [00:36:18] It's not the way to look at things from a business perspective. If that's the way that you reward your own employees, then you're not going to get to a place that creates a sustainable organization. 

Gemma [00:36:29] So I want to end on a little bit of, shall we say, futurism: casting things forward. We often hear about utopian visions when we talk about technologies like AI and automation, and then we hear the dystopian visions of how it could be terrible if we don't pay attention. I would love to hear what your vision is if businesses, people, and organizations do consider value, do consider fairness, do build their systems in a way that takes account of potential damage and so on. What, then, does the future of AI and automation look like? Why is that exciting? If we can make sure we have all this cautiousness around it, why is it still a thing to get out of bed for? 

Hessie [00:37:16] I think for me, especially in the last year, Covid has accelerated a lot of things: vulnerabilities in our systems, the death of George Floyd, the unrelenting anger at systems that have failed racialized groups for centuries, and the bias that disproportionately favored the very groups that created those systems. What I've also realized is that Covid has created a dependence on our digital systems that has brought to light this opportunism, this surreptitious tracking of us and our children through our cameras and our keyboards, where a lot of people still don't have either the knowledge or the control. I think a lot of these issues are being surfaced to the point that people are now scared; before, they didn't care. So I do believe there is a reckoning. It starts with companies changing the way they do business. But if the end user, the consumer, is fighting for proper use of their information and proper disclosure, then we can start to create a better narrative that reduces this "data is gold" stance and creates some kind of balance. I think the future is about creating equity in these systems that never existed before. That's where we're all heading; that's why there's so much anger. If you create equity in the midst of innovation, then the sky's the limit, because everybody can start to benefit from these systems and can start feeling like they can live their lives in a way where the opportunities are endless. And that has to be the same across the board: across nations, across people, across racial boundaries. 

Gemma [00:39:07] That was a wonderful note to end on, Hessie. We're going to leave it there. Thank you so much for coming and joining us on our show, for giving us a little bit of inspiration as well as instilling a little bit of the fear, too, which I do think is required for these kinds of conversations. But you've really also been advocating for why this isn't just something we shouldn't do because it's bad, but rather something we should do because it opens up opportunities and makes the world a better place at the end of the day, as pithy as that might sound. So thank you so much for coming and joining us on the show. 

Hessie [00:39:34] I appreciate it, Gemma. I had a great time. Thank you. 

Gemma [00:39:39] That's it for this week. Thank you so much for tuning in. You can find out more about Hessie's work and indeed some of the broader themes discussed today in the show notes. If you enjoyed the episode, please do take a few moments to rate and review the podcast. It really helps other people discover the show. 

Gemma [00:39:54] And don't forget to subscribe and tune in next time to continue our conversation about innovation, resilience, and our capacity to succeed. 

Ad [00:40:08] Learn how Microsoft Power Automate is helping organizations digitize paper processes and automate time-consuming, manual tasks – without writing a single line of code. Watch a demo by following the link in the episode description.