PLANETcollab ep. 2 w/ Karen Smiley
Everyday AI isn’t just for tech giants. According to our latest guest, Karen Smiley, everyday AI has to be about ethics, agency, and practical choices for families, small businesses, and communities.
Welcome to our latest PLANETcollab podcast
We loved talking with Karen Smiley about all things ethical AI including her new book, Everyday Ethical AI: A Guide For Families & Small Businesses. It’s written for everyday people — families, small business owners, and communities — who want to use AI responsibly and with confidence.
Karen, an AI leader and author, shares with us her background in AI and data strategy, commitment to ethical AI, and the importance of amplifying underrepresented voices in the tech community.
Karen also discusses her insights on building the SheWritesAI community, the ethical implications of AI, and practical applications of technology in everyday life. She emphasizes the need for personal AI use policies and the importance of agency in navigating the evolving landscape of AI.
At the end of this post, we’ve included the full transcript for you to read along.
Enjoy!
Jax & Erik
Takeaways
Karen Smiley has over 30 years of experience in AI and engineering.
She leads the SheWritesAI community to promote women and non-binary voices in AI.
Ethical AI is crucial for ensuring diverse perspectives are heard.
Karen’s podcast amplifies voices from underrepresented demographics in AI.
The PLANET Framework helps people think responsibly about AI.
AI can solve real-world problems without being overly complex.
Personal AI use policies can help individuals navigate technology ethically.
Community support is vital for emerging writers in the AI space.
Karen’s upcoming book focuses on practical AI guidance for everyday people.
The future of AI should prioritize ethical choices and individual agency.
Titles
Everyday Ethical AI: A Guide For Families & Small Businesses
Quotes
“We shouldn’t give up our agency so easily.”
“We need to have more ethical choices so that anybody could get the benefits of all these powerful tools without having to feel like they’re compromising their values. I’m not interested in shaming people for what they’re doing. I’m interested in finding ways that they can protect themselves and putting pressure on the companies to give us better ethical choices.”
“We do not inherit the earth from our ancestors. We borrow it from our children.”
Links & Resources
Featured
📖 Everyday Ethical AI: A Guide For Families & Small Businesses by Karen Smiley: Buy book
🌐 Karen Smiley’s site: everydayethicalai.com
✍️ Karen on Substack: Karen’s Substack
Communities
🤝 AI Vanguard Society (Mighty Networks) – sponsored by Michael Spencer
Projects & Tools
🛰️ Keeper Space – space situational awareness startup
🗺️ Human–AI Boundary Map – by Daria Cupareanu
📰 Stack Digest tool – by Karen Spinner
People Mentioned
Michael Spencer – AI Vanguard
Suzi Travis – AI community writer
Karolina Zieminski – “Karo (Product With Attitude)”
Jing Hu – interviewed here on Karen’s 6 ‘P’s in AI Pods (AI6P)
Frameworks
🌍 PLANET Framework – People, Language, Agency, Need, Environment, Technique (more on this soon)
Other episodes
🎙️ PLANET Collab Podcast (on Spotify)
Chapters
00:00 - Introduction to Ethical AI and Inclusion
00:46 - Karen Smiley’s Journey into AI
04:27 - Building Community and Amplifying Voices
08:05 - The Importance of Ethical AI
11:40 - Exploring the PLANET Framework
15:45 - Long-term Thinking and Future Implications
16:28 - The Realistic Benefits of AI
22:49 - Creating Personal AI Use Policies
25:04 - Staying Grounded in AI Advocacy
30:41 - Future Projects and Collaborative Efforts
32:27 - Everyday Ethical AI: A Practical Guide
Transcript
Introduction to Ethical AI and Inclusion
Jax (00:00)
Welcome everyone to the PLANET Collab Podcast. This is Jax NiCarthaigh with my collaborator, Erik Sanner, and today we have the wonderful Karen Smiley, and we’re going to hear all about Karen’s work today. Which is so exciting. Karen is a data analyst and AI expert with many, many years of experience, and she’s got a book coming out. I won’t go into all the details, I’m sure Karen will tell us about it, but it’s very exciting. I’m really rapt to have her here today.
If you are interested and you’re enjoying this podcast, please subscribe. The PLANET Podcast is based around a framework called PLANET, which is something we’ve developed all about responsible AI. It’s really very simple: we’re thinking about People, we’re thinking about Language, Agency, Need, Environment, and we’re thinking about the Techniques we use for AI.
Erik (00:35)
So thank you so much for joining us, Karen.
Karen (00:48)
Thank you, Erik. And Jax, do you want to say something first? Yeah.
Jax (00:51)
Yeah, I just want to say welcome, Karen. It’s really great to have you here. And I wonder as we get started, could you tell us a little bit about your background and also a little bit about how AI became part of your world? Where did these things start to pop up from?
Karen Smiley’s Journey into AI
Karen (01:07)
All right, well, I probably need to go back to when I was first going into college, when I was starting out, I wanted to build solar houses. That was my passion and how I was going to make a difference in the world. So I went to college to study mechanical engineering. And then I discovered programming in my first year. And I was just, this was so cool to be able to do something with a computer. So, but I really wanted to keep the engineering mindset of solving problems analytically.
So I looked around at all the other majors and said, which one’s got the most with data and statistics and programming? And that was, it turns out, operations research and industrial engineering. So that’s what I changed my major to. So I studied a lot of statistics, wrote a lot of software. I worked for two of my professors on a project that was analyzing ambulance data to see what factors affected patient outcomes, which I thought was a really cool thing. Like, this is the way you can make a difference.
So I’ve stayed in software really ever since then, always using computers to analyze data throughout my whole career. And then I took my first course in AI when I was working on my master’s. This is back in the late 80s. So AI has been around even longer than that; the term is even older, but I’ve been at it for quite a long time. I switched to actually using machine learning for real when I was in corporate research at ABB. So that was about 15 years ago. And it was just a natural progression for me to take up those kinds of tools. And then I moved back into corporate leadership roles. My most recent role was director of AI at Wind River Systems. And then when that role ended, I decided to fly solo and I started writing full-time about AI, and that’s how I met Jax.
Building Community and Amplifying Voices
Jax (02:46)
Yeah, through the wonderful SheWritesAI list, which Karen has curated on Substack, of over 500 women and non-binary writers across the world. That was actually very interesting; that’s where we came across each other. How did that actually come about?
Karen (03:08)
Yeah, so when I got to Substack, I started looking around for other women like me who were writing about and interested in AI. And I found about 30 of them. So I was following and subscribing. And in December last year, I saw a post by someone saying, here are 10 people you should be following about AI here on Substack. So I looked at the list of 10. No women. Okay, how could you not have at least one woman on a list of 10?
So I wrote to the guy privately and very nicely asked why there weren’t any women on his list. And he’s like, I just didn’t know of any. I didn’t know this guy before I saw the post. So I thought, “well, we can fix that.” I’ve got over 30. Let me make a list. So I put it out there and published it.
Then I said, you know, there’s gotta be more than this. So I just started looking around and people started suggesting to me, oh, here’s another person. And the list has just ballooned since then.
As we said, like as we’re recording this, we’re at 495 and we’ll be over 500 definitely by the time this airs. And I’m just meeting some of the neatest people that way. And it’s been really great. And we’ve started up a community. We’ve also just recently launched a new discussion area outside of Substack on Mighty Network. So I don’t know if you’ve heard of that. But it’s in the AI Vanguard Society that Michael Spencer is sponsoring and he gave us a space there. And so we’re doing some really cool things there. We actually are also working on plans for a collaborative book by some of the writers. We’re thinking of having a book where people with different perspectives on AI are contributing chapters.
One chapter is about AI and epistemologies in Africa. One is about the history of women in AI and why we need more women to be involved. And one’s going to be on agentic AI. So there are all these different perspectives, one on parenting, all these different aspects. It’s just people from that directory who have expressed an interest in writing a book. So I think that’s going to be really, really cool. I’m actually probably even more excited about that than my own book.
Jax (05:17)
Oh man, I often say, I don’t know if Karen sleeps, because, Karen, you’re so productive. It’s amazing. Absolutely amazing.
Erik (05:30)
You know, Jax has been talking about you for months and I see why. Basically everything you’ve said resonates with me.
It’s like you see these problems, but instead of complaining, you’re doing things about them. So you wanted to build solar houses. Obviously there’s a need, right, for energy efficient housing. You saw this list and instead of being sour, you reached out to the writer first of all, to find out what happened.
And then you went way beyond remedying it, right? You didn’t just publish a list in response, but now you’ve built a community of over 500 women and non-binary people.
It’s going to have a much greater impact than any article about any list would have. So, kudos to you. What else are you working on or building or exploring or thinking about right now?
Karen (06:19)
You mentioned earlier that I have my book coming out. It’s slated for September 14th, and I’m super excited about that. What I’ve really been doing since I joined Substack last year started when I was talking with a friend about the fact that I’m really sensitive to the ethical concerns. And a lot of people that I’ve talked to just didn’t really seem to be aware of them. Like, ‘I didn’t realize that the data was stolen’.
They’re just stealing everything. But when I’ve talked to people about the environmental impact, or about the unfairness to the creators, a lot of people just don’t know. So a friend suggested that I could do something about that. So I started up this podcast series a year ago, last August, and just started interviewing everybody. Because I feel like the voices of people who aren’t the eight-figure, US-based tech bros who are building this and profiting immensely from all of these tools aren’t being heard.
Women’s voices aren’t being heard and voices of people outside of the US aren’t being heard and from different demographics that are underrepresented in AI.
I felt like, again, no one can do everything, but we can all do something. And I felt like this was something that I could do that might be helpful. So I just started interviewing people who were gracious enough to share their time with me, as Jax was recently. And just finding out, you know, what are you doing with AI? What aren’t you doing with AI? What will you never do with AI because you don’t feel right about it? And how do you feel about your data being used? It’s been so interesting. I’ve learned so much from everybody. And I feel like it’s getting their voices out there, having all these different perspectives. My guests are from, so far, 20 different countries. And that’s pretty cool. I’m really just learning a lot about that. So I do that.
I publish an interview every week. Most of them are audio, but some are written, and I just had one come out today, number 73. And that’s it. Yes. And so that’s been really exciting and fun. So I do the interviews once a week, I am working on the SheWritesAI community, and I’m just writing where I can.
The Importance of Ethical AI
You know, I feel like the book is another way that I can take it one step further, beyond talking to people individually, and spread that awareness about AI ethics and the different things that we can do.
I feel like maybe the book could help me reach more people. And so that’s really why I decided to go ahead and write it. I had written some articles on Substack that were well received. I thought maybe I need to package up some of these and extend them. And everything’s very researched because I’m a research-oriented person. And so that’s why I feel like the book might be a good thing. Then like Jax said, I probably don’t know how to sit still. I’m always looking for something.
But I do have one data project that I’m working on. I’m actually supporting a friend who’s launching a space situational awareness startup. And in a nutshell, we need better data to keep satellites from crashing into each other. And we’re looking at ways to get more timely data from in space and doing better analysis of that data to incorporate that space-based data. And the company is called Keeper Space. And the idea is that we need to be keepers of the space environment before it’s too late, because our society depends so much on satellites nowadays that we really can’t afford to let avoidable satellite collisions disrupt our lives. That’s a way that, beyond the writing work, that I’m keeping up with the technical side of using data and trying to do something good with it.
Jax (10:03)
Wow, Karen. I can’t believe there isn’t something like that already. And it’s so important. I suppose we have a growing number of satellites, so maybe it hasn’t been a consideration before, but I’m really delighted that you’re actually doing that work. It’s helping; that will help the whole society, right around the world.
Exploring the PLANET Framework
The reason why Erik and I got this little podcast together was because I’ve got this habit of creating frameworks, trying to find mnemonics that help people figure out what I believe they need to figure out. And with AI, when I started getting involved in it a couple of years ago, it was really clear to me that people didn’t have an in. I read the other day that saying “AI” is like saying “transport”. It’s so broad a concept. You’d say, well, I’m going to get my transport and go down to the shop. And then I might get my transport and go over to Australia and visit Jax, and I might get my transport… it’s so big. So I wanted to create something that helped people access AI in a different way, to do it through responsible thinking, and to say, look, we’ve got choices here.
So that’s where the PLANET Framework came from. And it’s just a very simple acronym. It’s People, because I think we need to think about people first. It’s Language, because a lot of our AI, and the way we talk about it, is around language. It’s very broad. It’s around Agency: how people have agency. It’s around Need: why do we need it? What do we need it for? What might we need AI for? E is for Environment, thinking about, like you were talking about, the environmental impacts. And the last one is Technique, which is really thinking about how humans might use AI. And your work really runs right across all of those pieces, beautifully. I guess my question, because Erik and I have been on this adventure trying to figure out how else we might deepen some of this thinking: it’s nice to come up with a couple of letters that stand for what we believe in, but how might we learn from everybody else about what might enrich this idea? I’m wondering if any of those letters, any of those elements, relate to you. Is there anything that speaks to you in that set of letters, in that framework?
Karen (12:33)
I mean, the short answer is that they all do. I’ll try to be a little more specific about that. You called out the Need and the Techniques, for instance. One thing that I would say is, as you learn AI, there’s always this temptation: you have a shiny new tool, and you run around looking for things you can use it on. But it’s much better to actually start with what problem you’re trying to solve, what the real need is, and then use an appropriate tool.
And in a lot of cases, the problems don’t need a high powered AI to be well solved. Sometimes you can do something very simple. If you need a picture, there may be one that’s truly public domain or available from an artist that you can use and you don’t burn any resources in generating it. So a lot of times the very simple techniques, you know, simple algorithms that make predictions can be explained and people can understand them and say, okay, yeah, I see why it made that prediction. A lot of deep neural networks are very hard to explain. How did it come up with that recommendation? We don’t really know. So a lot of times just looking at the needs and the techniques and matching them up is really important.
So I definitely resonate with those. And certainly the people and the agency aspects because the people are impacted by AI. There’s people whose voices are being overlooked, the ones outside the US, the women, the non-binary, the people that aren’t insiders.
And so a lot of what I’ve been doing has been really focusing on helping the people to have their voices heard and to have that knowledge and awareness so that they know how to protect themselves and their family and how they can use AI and data safely and wisely and confidently to know here’s something that I can do with it. And I can take advantage of the power of these tools without sacrificing my family’s privacy.
So things like that. And agency also, because one thing that we’ve talked about in my interviews on the podcast is that a lot of people I think feel like, well, AI is inevitable. It’s here. It’s a done deal. All of my books and writings have already been stolen. It’s too late. My pictures have been already stolen. But I think it’s not too late. I think that we shouldn’t give up our agency so easily. I think that there’s a lot that we can do as individuals and then collectively as well. And we shouldn’t give that up. So I’m really happy to hear that you’re talking about agency as well. That’s really something important.
Long-term Thinking and Future Implications
And of course, there’s language and environment too. The environmental impact is one of the five ethical issues that I talk about in my book. And there’s also just the impact on society as a whole, which I think maybe fits into environment as well. What happens when we have massive job disruption, and how do we cope with that and deal with it as a society? It’s what one of my other interview guests, Jing Hu, called second-order thinking. And I know, Jax, you’re big into systems thinking.
But it’s this idea that you don’t just look one step, you need to look farther ahead or maybe all the way out to year 2200, right?
Jax (15:53)
Yes, exactly. Love a bit of long term thinking.
The Realistic Benefits of AI
Erik (15:58)
I actually think ‘all of them’ is a terrific answer to Jax asking what part of the PLANET framework really resonates with you. I agree. And to me, that’s a very optimistic answer. Before we started this podcast, Jax told me that she felt the conversation around AI was too often framed in binary terms. It’s awful because it steals, right? It’s horrible for the environment. It’s amazing because it’ll cure cancer. And you mentioned that there are all kinds of problems that can be solved without AI, just using simple algorithms or existing tools, other ways that may not be as resource intensive or as criminally thievey. So can you help us understand a positive-to-middle-ground, realistic vision of what is good about AI? Not necessarily that cancer is going to be solved at the touch of a button, but just a realistic, down-to-earth view: how is it positive, and how can we take advantage of it?
Karen (17:12)
Yeah, I think a lot of people, when they hear AI, they think about ChatGPT and generative AI. And actually, as we referred to earlier, there’s a whole lot more to it than that, which people have been using for years. We started implementing AI and machine learning systems in my previous corporate roles years ago.
And you do things like, it wasn’t called AI or machine learning at the time, but things like analyzing ambulance data and seeing what leads to a better outcome for the patients. Those are ways that analytics can literally change and save lives.
When I was working in one of my previous jobs, I actually did finally get to touch the solar business. I was working at a company that sold solar power inverters, so we worked on some analytics for the solar inverters, and that was very cool. But one of the things with renewable energy is that it’s highly variable. You know, a cloud comes over, and all of a sudden the power production from this field drops. So having analytics that can keep the grid balanced and optimize it so that the power stays up is actually really important. And it has to happen so fast that there’s no way for a human to be in those loops. But you can have intelligent algorithms that make that happen, optimize it, and actually get you more power from the equipment that you have and the amount of sunlight that you have. And so it’s applications like that.
You mentioned cancer. I have a longstanding passion for lung cancer research. And there’s actually been a recent study, I think I wrote a very small article on Substack about it, about how they’re using AI to diagnose lung cancer earlier. And there is no good diagnosis method right now. So it’s just very exciting to see those kinds of applications.
This is AI and machine learning, but it’s not generative AI. It’s not the systems that are, you know, especially burdensome in chewing up compute cycles and requiring new large data centers to be built. I mean, generative AI has its applications as well. I mean, even looking at, you know, some of the people that I’ve talked with, they have been able to use generative AI to help them.
For instance, if they are writing, and English is not their first language and they’re trying to write an article in English. It just helps them to express ideas that really deserve to be heard. And that would be difficult for them to communicate otherwise.
And so there are applications like that where it’s not so much the tool. And there are other people, for instance, who are neurodivergent. I’ve talked to a few people with ADHD, and they’ve said how much the genAI tool helps. They can just spill everything that they’re thinking at a hundred miles an hour into the tool and have it sort and organize it for them and help them structure their thoughts. Or it helps with executive function or with certain kinds of planning. Those kinds of applications, I think, are where the real value can come from. Where the value doesn’t come from is someone saying, ‘write me a 1,000 word article about X’, and then they just post it. That doesn’t add any value to the world, right? Because anybody could do the same thing with the same prompt. So that’s not really very useful.
Creating Personal AI Use Policies
And I do have a policy on my site about how I do and don’t use AI, and why. It’s something that I really encourage everybody to do, because what’s right for me is not necessarily right for somebody else. I don’t have to use generative AI for my writing, because I’m a native English speaker, I’m well educated, and I got a very good grounding in grammar from my elementary school teachers. So for me to say I’m not going to use it, it’s not much of a sacrifice. But for other people, it may be a very big deal.
And what’s right for me is not necessarily what’s right for somebody else. But I do think that being transparent and writing down, “this is what I use, this is what I don’t use, this is why,” is something very simple that everybody can do.
I’m actually structuring a workshop that I’m going to be running remotely with a group in Africa next month, which is really cool, and one of the things that we’re gonna do is work through how to write your own personal AI use policy. This is what I use, but it has to start from values. Like, what are my values? What are the qualities that I feel are most important? And then looking at your tasks. There’s another writer on the subject, Daria Cupareanu (I don’t want to mispronounce her name), who basically laid out what she called a human–AI boundary map, which says you put your tasks into three zones.
So one of them is this sacred zone where these are things that I will never do with AI because they are important. They are human. They are things like, you know, going for a walk or having a heart to heart conversation with a person. And these are things that I will never do with AI because I value the humanness of that activity. And then there’s another zone where, you know this stuff is just annoying. It just needs to be automated. And so there’s that automation zone. And then there’s sort of this in between augmented zone where these are things where AI can help me, but I’m still the one in charge and control and it’s augmenting me. It’s not replacing me.
And so just thinking about breaking up your tasks that way. And I didn’t have that three zone framework when I created my policy last year, but I’m probably going to go ahead and think about structuring mine that way and just getting more clarity around how that maps to what I value.
And I think this is just a great exercise for anybody to do. My book has, you know, five everyday AI and data risks, five ethical concerns, and then five actions. And this idea of writing your own AI use policy is one of the five actions that I feel is simple enough that everybody could do it. I think maybe the word policy can be scary, but a policy just says what you do. We all have a policy. We have ways that we make decisions: I’m going to use AI for this, I’m not going to use AI for that. But we maybe just haven’t written it down. That is your policy. We all have one. But I think writing it down can bring a lot of clarity. And it’s also a way to be more transparent. As a writer, I feel like people who might want to read my writing should know: this is what I used, this is what I did, and this is why.
And I think the shaming part of it needs to go away. I mean, there are some times where people are being shamed: “You used AI? Well, that’s terrible. You’re a terrible person, because AI is unethical,” and everything. But I think any shaming needs to be directed at the companies that are giving us unethical tools. We need better choices.
We need to have more ethical choices in AI so that anybody could get the benefits of all these powerful tools without having to feel like they’re compromising their values. I’m not interested in shaming people for what they’re doing. I’m interested in finding ways that they can protect themselves and putting pressure on the companies to give us better ethical choices.
And some are starting to emerge and that’s just really exciting. So I always want to shine a spotlight on those when I come across them.
Staying Grounded in AI Advocacy
Jax (24:32)
Thank you so much for sharing that. You know, I’ve been thinking a lot about how we’re going through so much of this change in abstract work, I suppose: trying to learn AI, seeing where it might take us, stretching our imaginations, and then trying to figure out how we actually put pressure on the global AI organizations to give us those ethical tools. So there’s a lot of hard work in there, and you are definitely a very, very hardworking person. Is there something that you do to stay grounded?
Karen (25:04)
That’s kind of a tough question, because I probably don’t do enough to help center myself and keep that balance; I feel so passionate about doing this work that I think I do struggle with that a bit. The one thing that I tend to come back to, and this is not quite the year 2200, is that I think about my grandkids and the kind of world that we want to leave them.
I really like the quote that we do not inherit the earth from our ancestors; we borrow it from our children. And so we have been borrowing it. And with respect to AI, we are also doing that: we are setting up what their world’s going to be like when they grow up, when they go into the job market, and as they cope with aging into a different type of generation, a different lifestyle.
I think the choices that we make now and the pressure that we exert on companies to do better are all things that can help to make their world better. Everything that we can do as far as the environmental impact is certainly something that will affect their lives.
And it’s something where, you know, I want them to be able to look back and say, you know, Nainai* tried.
Erik (26:21)
I think it’s inspiring to keep that in mind, this concept of borrowing the earth from our progeny, not getting it from our ancestors. I think that’s terrific. As you coordinate these hundreds of other people you’re constantly in touch with, who is particularly inspiring you right now? Is there a project or a person that comes to mind, that you’ve been thinking about and just think is really cool, that we should know about?
Karen (26:58)
Yeah, I don’t know that there’s one specific person who really stands out. I feel like there are so many. I’ve really been enjoying the AI community on Substack. I feel like there were some people, like Michael Spencer and Suzi Travis, who were very welcoming when I first joined. It’s like, wow, you don’t even know me and you’re being nice. And that was kind of the vibe that we got off to a start with last year. That was just so inspiring and energizing, to be able to have that basis and that kind of support. So I try to pay that forward now. But I think in general, the AI community there has been great. I love my AI, Software, & Wetware interview guests. I’ve learned something from all of them, and I always draw some inspiration from something that someone’s doing. Like, I didn’t even think about that, and that’s really cool. And the SheWritesAI community is just awesome; I’m really enjoying interacting with them.
And I really love discovering new writers, someone who just joined and has four subscribers and is trying to figure out what they’re doing. I try to look for them and look for their writing and encourage them and give them a boost.
And it’s really been fun seeing people like Karolina Zieminski, who has just taken off, and she’s doing great work. And she started up a free service for people.
I see so much of that, where people care and are willing to give back and contribute. Karen Spinner is doing this awesome Stack Digest tool, which is currently free, and we’re using that to try to get a handle on all of it. I can’t read 500 newsletters. We started doing two digests, and I think the last one covered 173 articles in a week from the women in the SheWritesAI community. I can’t possibly read them all; just trying to get them in my mailbox would be overwhelming. But having the digest reference, oh, she wrote about this this week, I really want to go see that, just feels like a more manageable way to do it. And Karen Spinner is doing all this for free. People like that, I think, are inspiring. Seeing other people giving back, supporting the community, and helping others is what really keeps me going.
Jax (29:18)
Substack is a lively haven for the voices that you’re talking about, the ones not necessarily part of the big PR machine, although there are some very well-known voices in there. It actually has a sense that if you’re interested in and knowledgeable about an area, you can come in and start talking or writing about it. And communities like the ones you’ve grown have actually allowed people to have those voices.
Karen, I’ve noted some of the ones we’ve gone through, and we’ll make sure we link as many as we can in the show notes so people can look up these writers. I really encourage people to have a good look at Substack, and at the SheWritesAI community as well as the AI conversations that are there, because they’re current and they’re vibrant. If you are listening and thinking about Substack, go and have a look, especially at Karen’s work. It’s extraordinary.
Future Projects and Collaborative Efforts
I wonder, you’ve given us such a sense of how many wonderful people are doing good work out there. And you’ve also got this whole suite of projects, Karen, that you’ve got up and going. If I can do an algorithmic prediction in my brain, in my wetware, about where you’re heading to, I would say that you have some other dream projects that you may be wanting to move towards.
Karen (30:41)
Yeah, I hinted at it earlier: the idea of having a SheWritesAI collaborative book. I think that is going to be my next project after I recover from writing this one. In the long-term view, my dream would be to see us get to a more equitable society where creators don’t get exploited, data workers aren’t mistreated, and people’s privacy and autonomy are respected, and we all have these powerful tools that we can use, and it just helps improve life for everybody. That’s kind of the dream, and I don’t know if I’ll live long enough to see it. I hope I will. I hope it comes soon. But in the meantime, I’m just looking for the next thing I can do that I think might make a difference.
And I feel like this SheWritesAI collaborative book could be that thing, because it’s giving people a voice. Right now we’ve got 20 people penciled in to write chapters from all these different perspectives about AI. And we’ve actually talked about ethics; we’ll have a chapter on ethics. And what we’re finding, as the chapter authors are writing their chapter proposals, is that ethics shows up in more than one place. It comes up in the one on AI and writing, and it comes up in another chapter on parenting. So we may still have a chapter on ethics, but it won’t be the only place it shows up, because, you know, AI is everywhere, and AI ethics really needs to be considered everywhere as well. In hindsight, I probably shouldn’t have been surprised by that, but it was, I think, an interesting observation as the chapter proposals started rolling in. That, I think, will be my next project, and I’m really excited about getting it off the ground.
Everyday Ethical AI: A Guide For Families & Small Businesses
Erik (32:23)
Your book, why don’t you tell us the title and a little bit about it, please.
Karen (32:27)
Sure, it’s called Everyday Ethical AI: A Guide For Families & Small Businesses. The idea of the book is that I really want a book that’s focused on everyday people and everyday life. A lot of the books that you see about AI are focused on corporate governance, how to do technical things with it, using these different agentic AI frameworks, and things like that. And I feel like more practical, day-to-day advice and guidance has been missing. So I felt like writing the book would be a way to target that.
So I do have a website, everydayethicalai.com. That is up and people can find the book there easily, along with some information about it. It’s not long, and I used a larger font, because I like to read larger-font books. It makes the book more accessible, I think. I’ve also got a ton of references, over 300 now I think, because I want people to know that when I assert something about what’s happening with the environment or the job impact, they will be able to see where I got that information. And this is a dynamic area; things are going to change. So I’m making the references available in an online table where you can click, sort, and find newer information. I’m trying to combine the book with some resources that will help. It’s been a really fun adventure writing it. The writing is the best part. All the mechanics of publishing it are kind of a pain, but it will be worth it.
Erik (33:56)
Is there anything that you kind of want to leave our listeners with before we say goodbye and share all of your information?
Karen (34:03)
Yeah, I think what I’d like to say is: don’t be intimidated by it. There’s so much hype, there are so many myths, and there are people pushing the panic buttons, you know, “Get on the AI bandwagon now or you’re going to get left behind.” I think a lot of that fear mongering is typically motivated. When someone says things like that, they usually have some kind of self-interest in it. If they’re talking about AI’s inevitability, their stock value and their worth probably depend on that.
So always look, you know, if someone is proposing something or pitching something, it’s like, okay, where’s the money? How is this affecting them? And don’t let it throw you; don’t let the fear get into you. Those of us who have been around tech for a long time, and I have, were there when the internet was first a new thing that started having an impact on businesses, and it changed a lot of things, but in ways that we would not necessarily have expected at the very beginning.
And the same thing with mobile devices, you know? I think there’s a lot of potential that we aren’t even able to see yet. I don’t think this big push for AGI, artificial general intelligence, or superintelligence, ASI, is what we should be focusing on. I think we should be focusing on practical applications, getting value from it, and looking for ways we can do that safely and wisely, and be confident: yes, I can do this. Here’s an ethical tool I can use, or here’s something I can do that I feel good about, that won’t give me problems sleeping at night, because I have a way I can do this and still get the benefits and still protect my family’s privacy.
And I think that we shouldn’t give up our agency, to use a PLANET term. We the people should not give up our agency.
Jax (36:10)
What a wonderful way to leave us resonating on that.
So thank you so much for being with us today. We do hope that your book goes really well, and I know it will, because people will hear this and of course want to go and get it. I would also encourage our listeners to go and have a look at Karen’s work. She is phenomenal, and the people she supports are absolutely doing wonderful things.
Thank you very much. It’s been a delight and an honour.
Karen (36:37)
I appreciate you and Erik inviting me, Jax. It’s really been a lot of fun. So thank you.
Erik (36:41)
Thanks so much.
*Nainai is Chinese for “father’s mother”. This is what Karen’s grandkids call her.
©2025 Jax NiCarthaigh and Erik Sanner. All rights reserved. No part of this publication may be reproduced or distributed without the author’s prior written permission, except for brief quotations in reviews or non-commercial uses permitted by law.