GlowUp with Shaman Isis

AI's Role in Transforming Military Training: Insights with Dr. Tim Cooley

Cynthia Elliott aka ShamanIsis

Ready to discover how artificial intelligence is reshaping the world of education and military training? Join me as I sit down with Dr. Tim Cooley, a retired Air Force officer and expert data scientist, to unravel the profound impact of AI on learning environments. Get ready to explore the fascinating world of adaptive education, where Dr. Cooley unveils the art of data storytelling and highlights the necessity of unbiased interpretation. Together, we examine the challenges faced by younger generations inundated with information and stress the importance of maintaining data quality, especially as AI continues to evolve in critical sectors like defense.

Embark on a journey through the transformative potential of adaptive training methods, particularly within military contexts. Dr. Cooley and I delve into the promising role of AI-driven simulators in personalizing military training, increasing efficiency and effectiveness while honoring the balance between uniformity and individual talent recognition. Uncover the revolutionary impact of data analytics in optimizing training programs, and how AI is redefining not just educational practices but also battle strategies and reconnaissance operations.

As we wrap up our exploration, we navigate through cutting-edge advancements such as the ASALT marksmanship trainer and neuroscience breakthroughs, from real-time EEG technology to the science of dream visualization. While the excitement surrounding AI's rapid advancements is tangible, we also voice caution about the risks.


Spiritual guru, two-time #1 best-selling author, and higher consciousness advocate Shaman Isis (aka Cynthia L. Elliott) is on a mission to turn the tide of the mental and spiritual health crisis with mindfulness practices, incredible events, powerful content, and motivational storytelling that inspire your hero's journey! Learn more about her books, courses, speaking engagements, book signings, and appearances at ShamanIsis.com.

Ready for a life transformation? Ready to bring your dreams to life? Then you will want Glowup With Shaman Isis: The Collection of inspiring books and courses filled with life lessons and practices that raise your vibration and consciousness. 


GlowUp with Shaman Isis: An Edgy Podcast for Transformation and Higher Consciousness

Are you captivated by inspiring personal stories, hero’s journeys, and reflections on spirituality's place in modern life? Tune in to GlowUp with Shaman Isis, the bold and uplifting podcast by spiritual rockstar, 2x #1 best-selling author, and veteran podcaster Cynthia L. Elliott—aka Shaman Isis.

Discover more at ShamanIsis.com or SoulTechFoundation.org.

Follow her on social media at:

X / Twitter

TikTok

Facebook Page

LinkedIn

Media Kit

Speaker 1:

Well, hello, hello, hello, and welcome to Citizen Journalist. I'm your host, Cynthia Elliott, also known by my spiritual avatar, Shaman Isis, to those of you who read my spiritual books, and I am excited about today's episode because we're going to be talking about a different topic, something that, if you know much about my work, you know is near and dear to my heart. We're going to be talking about AI and education and, in particular, adaptive education, and today I'm joined by somebody with a pretty incredible resume, Dr. Tim Cooley. Thank you so much, and welcome to Citizen Journalist.

Speaker 2:

Well, good morning.

Speaker 1:

Well, thank you for joining us, and do you think you could just take a minute to share a little bit about your history, because it's really fascinating.

Speaker 2:

Sure. I'm a retired Air Force officer. I spent a little over 20 years in the Air Force, but I spent most of my time teaching at the Air Force Academy or in school. I taught in the math department there, and I taught in the computer science department, and I taught in management, and then I taught a semester in econ. Kind of a long story on that one. So I've done a lot of stuff at the Air Force Academy.

Speaker 2:

By background, I'm an operational analyst, or an OR analyst would be a better term. But I call myself a data scientist these days, because when I was going through school I was a computer science and math guy, I have degrees in both, and nowadays that would be a data science degree. And now they're changing it to data engineering, I'm guessing, but I still say data scientist.

Speaker 1:

So you do a lot of analysis?

Speaker 2:

I do a lot of just manipulating data, writing short routines to manipulate data.

Speaker 2:

Here's what I used to tell my students: data tells a story, and your job as an analyst is to tell that story in an unbiased way so that people can get insights and make decisions based upon what you say. So that's what I've done, and right now I run my own company doing consulting. I work mostly for the Marine Corps, but I have worked for the FAA doing a risk analysis for North American airports due to wildlife, and I did some counterterrorism work back in '09, '10, somewhere in there.

Speaker 2:

So I've done a lot of different things since I retired.

Speaker 1:

Wow, it's so interesting. You know, I was curious about something when you mentioned that the data tells the story and you have to be able to interpret that data without prejudice. Do you find that the younger generations, because they were educated differently, they've been exposed to phones, they've had superpowers, if you will, superpowered information access, which I think might in many ways bias them a little bit, do you find that younger generations are struggling with their own prejudices, if you will, in interpreting data, or is it not really much different?

Speaker 2:

I think we all do, to some extent, right? But I do think they struggle a little bit with not being able to have the answer right away, having to work a little bit for the answer. We have so much information at our fingertips, and we're all on information overload, that I tell the younger generation, you've got to be able to sift through all this data and figure out what's even good data. I mean, that's the key to most everything. You've got to have good data to start with, especially in the world of AI and neural networks. My PhD was in medical imaging, and I did a system to automatically read mammograms and used neural nets to do that.

Speaker 2:

And the key was you had to have good data, and not all mammogram images are good; frankly, some are fuzzy.

Speaker 2:

Some of them have abnormalities in them that are not cancer abnormalities.

Speaker 1:

Well, anyway, it's interesting that you talk about the data and the purity of the data. One of my biggest concerns about artificial intelligence, and I'm kind of torn about this because I have a vested interest in it as well, but I was working in New York City when Google really first came in.

Speaker 1:

It had been out for a little bit, but it wasn't like a thing, and it wasn't yet turning into the search engine that it is now, and I watched it be built. Some of the first stories I worked on with the media were some of the first stories that were ever cached, if you will, into the system, about topics that just hadn't been covered yet, because it was that new. And so, paying attention, I see patterns, and one that has me really concerned is that we've got the AI large language models being built, and they're sort of unceremoniously scraping everything and dumping it in. Do you see that as an issue right now with the large language models that are being built? Are they repeating bad information? Is that something we should be as concerned about as I think we should? I don't know, maybe I'm just being sensitive.

Speaker 2:

I maintain that one of the big security issues for the United States as a country is going to be somebody injecting bad data into whatever AI we choose to use in defense. That goes into the cyber realm, of course, getting into a data stream. But still, the machine is only as good as the data you feed it. Obviously there are better algorithms to train than others and so forth, but the bottom line is, if you train on a bunch of garbage data, you're going to get garbage out, and that's been known in computer science for, what, 80 years.

Speaker 1:

Yeah. The reason I ask is that, as a communications pioneer, I introduced some concepts into the lexicon, if you will, and I realized that I was able to influence the output, and I loved it. People were arguing with me about this, because they were like, no, you can't, because the data is pure, you don't control the data. And I'm like, but I can control where you're getting the information from, and I had realized that I was able to influence what it was repeating based on putting that information where it could be picked up.

Speaker 1:

And, you know, I hesitated to even share it with people. First of all, people said, that's not possible, and I was like, you're talking to a communications professional, but I'm telling you, you can actually insert bad data into it. Because my thing was, I can insert good data, good stories that are accurate to a corporation's real story, and help them develop their reputation within AI, instead of them realizing in six years that they have a bad reputation in AI because nobody ever took command of it. And then I'm like, wait a minute, that could actually be turned into something bad. Do you think that's something people should be aware of?

Speaker 2:

Oh, absolutely. I mean, let's look at this. If you've got a big corporation, and let's just say they did a quarterly meeting and they had a couple of bad things they had to announce, if the media keys on those things and you get 10 stories on those two topics from that quarterly meeting, compared to one story on the 10 good things that are happening, business is up, blah, blah, blah, maybe the bad thing was they had to do a one-time write-off or something, what happens? The average person, the average consumer, they hear the bad stuff, because they get 10 times more of it than the good.

Speaker 2:

And it's no different than anything else; we key on what we hear the most of.

Speaker 1:

I think people, too, are ascribing a level of purity to what they're experiencing with AI that is, I think, kind of dangerous, because they're seeing it as something so sophisticated that it's serving up accuracy. Now, granted, there are people who are complete pessimists, or who just tend to scrutinize things more, but I do think people are kind of gifting AI with this clarity of information, this discernment. Do you think that's true?

Speaker 2:

I think they are. I mean, you know, if AI produces it, it must be true, or the converse of that is, if it's on the web, it must be true. Well, we all know that's not true. So it depends on what your AI is being used for. Certainly some of the AI in medicine, where the data is scrupulously looked at and cleaned and everything else, some of those answers are good, and I would trust them.

Speaker 2:

But if you're talking about general AI that's out there scraping the internet and whatever, I'd be leery. And again, it depends on who's using the AI to talk about various different things, climate change or whatever. You certainly can get different answers based on the data you use.

Speaker 1:

Yeah, you're using AI in an interesting way. I say "using"; those of you who are listening might have a better term for it. One of the topics that I definitely wanted to touch on today was your use of AI for training, adaptive training, adaptive education, and I find that really interesting. Can you explain to our listeners what that is?

Speaker 2:

Sure. So for most parts of education, there's a thing the military calls a program of instruction. There are various other terms for it, but the program of instruction is essentially: you start at point A, you complete these tasks, and you end at point X or Z or wherever you end, and everyone graduates at the same time, or completes at the same time.

Speaker 2:

The problem with that, and I'll use an example: we have simulator driver trainers in the military, because there's a lot of people out there that have never driven a big truck, and so we have simulators to help them understand and learn how to do this before we stick them out on a road in a big truck and they panic and everything else. Some people come in, they grew up on a farm in Kansas or Nebraska and they've driven big trucks since they were 12, and this is no big deal to them. Others come in from, like, New York City, where you are, or Chicago, in the city, where they may never have even driven a car; they may not have a driver's license. And so, because we have vastly different experiences, why do we all have to go through the same program of instruction? What we've proposed, and written a couple of papers on, is this idea of adaptive training. So I start at the beginning, and, oh, I'm really quick at this, good at this, and the learning algorithm moves me ahead quicker than somebody who isn't good at that. And not only that, the AI looks at things that I don't do well. So, for example, backing a tractor trailer is a little bit tricky. Maybe I have problems backing a tractor trailer, so it develops the scenarios so that I see more backing up than I do driving forward. Maybe I have trouble making left-hand turns, I cut the corner too quick, so I'm going to see more left-hand turns in my scenarios than the guy sitting next to me, because he can do that really well. So that's our whole thought in doing that.

Speaker 2:

So what's the benefit of that? Why do we want to do this? Well, the benefit is we get some people through training quicker, and training is expensive. I mean, you're paying somebody to be in training, so you're not getting actual benefit from them; you're paying their salary, but not only that, you're paying for all the training mechanisms that go along with it. So if you can get people through training and have them be as well trained, that's the key, they have to be as well trained, then hey, let's do that. And not only that, maybe you can give them some advanced scenarios that they wouldn't normally get, just to help them get better. Do you see that?
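For readers curious what that scenario-weighting logic might look like in practice, here is a minimal sketch in Python. Everything in it is hypothetical and invented for illustration; the skill names, the weighting rule, and the update rule are assumptions, not details of any actual military trainer.

```python
import random

# Hypothetical sketch: pick the next training scenario with probability
# proportional to the trainee's error rate on each skill, so weak skills
# (e.g. backing up, left turns) come up more often than mastered ones.
def pick_scenario(error_rates, rng=random):
    """error_rates: dict mapping skill name -> error rate in [0, 1]."""
    skills = list(error_rates)
    # Small floor so well-mastered skills still appear occasionally.
    weights = [error_rates[s] + 0.05 for s in skills]
    return rng.choices(skills, weights=weights, k=1)[0]

def update_error_rate(old_rate, succeeded, alpha=0.2):
    """Exponential moving average of failures after each attempt."""
    return (1 - alpha) * old_rate + alpha * (0.0 if succeeded else 1.0)

trainee = {"drive_forward": 0.05, "left_turn": 0.40, "backing_up": 0.70}
scenario = pick_scenario(trainee)  # most often "backing_up" for this trainee
trainee[scenario] = update_error_rate(trainee[scenario], succeeded=True)
```

The floor constant and the learning rate `alpha` are arbitrary choices here; a real adaptive trainer would tune such parameters against measured training outcomes.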

Speaker 1:

No, I love that. I think AI is definitely going to revolutionize education: giving kids a partner, if you will, an AI partner, to help them with their learning, and relieving teachers of a lot of responsibility. And I love the idea of this in the military. I understand the need in the military for sameness, but I also think that what somebody is gifted at, their true potential, could probably be much easier to see if we're training them with advanced techniques like that, where we're not making everybody the same. I mean, they're getting the same fundamental education, but we're able to see more and influence more in that process. Can you share a little bit on that, the idea that it goes beyond just trucks?

Speaker 2:

I think we have to break training into two different things. There's initial training, which people sometimes call tech training, where somebody comes out of basic training or some intro program and they go into what's going to be their specialty. We send them to a school, and in those cases I'm not so sure sameness is as important. The idea behind sameness is that if you're a unit, you go through the same experiences together; it helps in bonding and so forth. But when you're in tech school, that's not your unit. You're going to ship out. There's 40 people in tech school going to 35 different places.

Speaker 2:

So it's a whole different ball game, in my opinion, to do that intro, that entry-level training, that tech school training. And there, I think, hey, the quicker we can get people through, the quicker we get them into the real forces, the quicker we'll be able to give them advanced training, the quicker we'll get them into a unit where they can bond, and blah, blah, blah. So I agree the sameness training is certainly a big thing, but typically it's when you're in a unit together.

Speaker 1:

It's interesting. You're going to be able to see a lot more data on your soldiers and on their experience. You're probably already seeing a ton of data as a data scientist, or data engineer. Can you share a little bit on that? Is that going to help you better direct and, you know, organize, reassess, realign, create new programs?

Speaker 2:

Yes. So I don't think I'm saying anything that isn't published, because I believe all this is published. The Marine Corps, in particular, is going to a new marksmanship trainer called ASALT, and I can't remember what the acronym stands for, but the bottom line is, it's a much more immersive trainer than the current one they use today. It allows us to gather an order of magnitude more data, and not just marksmanship data, like, okay, they hit the target, they hit the target within a certain radius, blah, blah, blah. It allows us to get biometric data.

Speaker 1:

Oh wow, that's so cool.

Speaker 2:

Yeah, heart rate. I believe it has, or is planned to have, an option for eye tracking, so you can see where the person's looking. It measures how quick they get on target, how quick they can find a target, and how quick they can zero in on it. So it gives us a lot more data. And what does that do? Well, heart rate's important, and heart rate variability is also important. But some of these things allow us now to look at your marksmanship score and say, okay, you're not hitting well; you're taking too long to zero in, or you're doing it too quickly, you're not waiting long enough. So there's a whole bunch of nuances now that we can train on, and we can get down into some really interesting things to make them better.

Speaker 2:

Better Marines. And there's a guy out in California, I think it's San Diego, might be LA; he's a neuroscientist I ran across at a conference. He has developed essentially a real-time EEG. It's a membrane that fits inside a helmet, where they can now look at the cognitive load, I guess not overload necessarily, of a soldier in training. They probably wouldn't want to use it on the battlefield, but these are wireless, and they can gather all sorts of EEG data, which is fascinating. I haven't actually caught up with him for a year or so.

Speaker 1:

You know, I got on this tangent last year where I was like, oh, I can create a mood booth where I'm able to use biofeedback and the senses, the human senses, to bring people more into energetic alignment, because everything is energy, there's a vibration, and there's a vibration that is ideal. And I got kind of on this kick about it. I was making it really complicated; then, the more research I did, I was like, oh, they're already there, the elements are already there. So it's kind of like using that same sort of idea, that you can take the biofeedback.

Speaker 1:

Honestly, I'm sorry, I'm bouncing all over the place, I get really excited about this, but I just find it so fascinating, the fact that they were recently able to share dreams between two human beings while they were asleep, and the fact that we can plant chips in people's brains and visualize what they're thinking. I mean, that's so fascinating to me. Do you find AI, or how fast it's moving, to be a little scary?

Speaker 2:

I mean, it can be, and I haven't kept up with the attempts to regulate it. There are certainly a bunch of different places, countries and organizations, that are trying to regulate AI, and I haven't kept up with it; I need to. I typically don't get into the legal aspects of all these things. But I think it's like anything else, right? There are uses that are certainly toward the good, and there are uses that are not, and pretty much any technology can be used in a good way and can be used nefariously.

Speaker 1:

So, with adaptive education, how do you see that evolving? I know you have to be careful about what you say, because you want to make sure we're protecting potential. What do you see those impacts being? You've talked about trucks; where else do you see adaptive education really coming in handy in training, or evolving?

Speaker 2:

So I'm in the simulation world right now, pretty much modeling and simulation and looking at the data that comes out of that. In the military, in the DOD, we have a number of, well, people call them games; I suppose they're simulations of battles, of what would happen if we did X, Y, Z, and we run these things 10,000 times before we ever do a battle plan.

Speaker 2:

And the thing that AI can do, the thing AI does really, really well, is identify patterns; it picks up on nuances that perhaps people could pick up on, it's just that the AI picks up on them an order or two of magnitude faster. So when we do these simulations and then we do the analysis of what worked and what didn't, I think when you start putting AI in there, where it can go through these 10,000, 20,000, 50,000 iterations and, within probably minutes or even seconds, give you some answers on the nuances that happen, oh, did you see that this happened, or maybe you need to do this little tweak in your battle plan and it'll make it more effective, I think those are going to be game changers, so to speak, no pun intended.

Speaker 2:

And so I think there's other uses for AI, certainly in the aerial battle space. My background is Air Force. We are going to more unmanned vehicles in the air for a number of different reasons, and I think you're going to see a lot more AI involved in some of those, particularly in target recognition and reconnaissance, those kinds of things where it can pick up again small patterns. Is this a true target or not?

Speaker 1:

That's interesting to me. I mean, we're definitely headed toward more machine-versus-machine warfare, technology warfare, than human-versus-human warfare, the human being face-to-face with the enemy, which is kind of terrifying in a way.

Speaker 2:

It is. I just presented a paper last May at a conference called ModSim on the man-machine interface and how you try to measure the effectiveness of those, and hopefully we'll be able to get some testing out on a test range down in Texas. But I don't believe we will ever go to a battlefield that's machine-machine only. I think we'll have machines being guided by humans. But again, this is where somebody could nefariously go, no, I'm going to let the machines go, and that could be pretty scary.

Speaker 1:

Yeah, it really is. The future is this incredibly exciting time. I'm thrilled that I'm alive to see the things that are happening, because they are science fiction, and our science fiction writers were futurists. They always were, and they were predicting the potentials for the future, even though they were treated like fantasy writers, and they were, in many ways. But so many of the things that were on the television I grew up seeing, you know, the Jetsons, that kind of stuff, have all come true, and it's incredibly exciting. I think it will bring abundance into the world. It will also bring moments of us having to ask ourselves some pretty hard questions.

Speaker 1:

Just knowing human nature, you know, I've seen this repeatedly in my lifetime, where the scientist or the researcher gifts the creation with some morality that they themselves have, but they don't ask themselves whether that is actually going to hold, or whether it's going to get out of control. It certainly presents a vivid picture of the future. What do you think in terms of artificial intelligence? I mean, you've done so much research as a data scientist, and your background is so fascinating. What do you think are going to be the biggest implications in the next, say, five to 10 years, given where we stand with AI right now?

Speaker 2:

It's actually really interesting. So some of the implications are going to be what happens with AI in the financial markets. Obviously, we are now in a global market, there's no question of that, whereas 80 years ago we were not, necessarily. Everything that countries do, at least the larger ones, the larger economies, affects the other countries. What China does affects the US, what the US does affects China, and Europe and so forth.

Speaker 2:

And the question is, what is going to happen with AI in the financial markets? Financial markets are unpredictable, but I'm sure there are people right now, almost guaranteed, that are trying to use AI to get an edge on the stock market and investments and so forth.

Speaker 2:

And I don't know where that's going to end up going, but I think it's going to have some major implications. And is that going to involve digital currency, the Bitcoin world and all the other ones? What's going to happen with that? I think that could also play a major role. And will there be, I should say, AI where people are manipulated who don't even know they're being manipulated?

Speaker 1:

Yeah, yeah.

Speaker 2:

I think, as we talked about before.

Speaker 1:

I see that with algorithms all the time. I did a podcast last night, and somebody was saying that this one social media platform was a cesspool, and I was like, it gives you what you keep engaging with.

Speaker 2:

There you go.

Speaker 1:

If you think it's a cesspool, perhaps you should ask yourself what you're engaging with. Yeah, the financial markets, that's really interesting. And of course, education is going to undergo a transformation.

Speaker 1:

I mean, frankly, I've been really frustrated with our education system since the '70s, when I was a kid. I remember sitting in class going, I'm supposed to sit here and take this long to get through one book, when I can read a book in a day? You want me to actually drag it out for a few weeks? It was a nightmare, and I couldn't sit still, and I was like, well, this doesn't make sense to me. Why does everybody have to go about it the same way? Why am I not being allowed to work at my own pace? So I think it's going to be really great for different paces. It's not to say that my pace was better or worse; it was just different.

Speaker 1:

And I think one of the biggest drawbacks to our education system, aside from the fact that our teachers are underpaid and underappreciated, because they're mostly women, let's be honest; if that were a majority of men, we would have been paying them, but we like to assign the emotional and mental labor that we undervalue to women because of our history. So I think giving them tools like artificial intelligence, gifting the kids with that, and being able to pace education to the student is going to be huge. Do you agree?

Speaker 2:

Yeah, I think it will be. To me, the interesting thing is that our assessment methods are going to have to change. So, for example, take AI in the soft sciences or social sciences, where people are doing research papers, writing papers and so forth. Maybe not even research papers, maybe just philosophical papers. Now you can go into ChatGPT and get one probably 80 to 90% written, or maybe 100% written, and all of a sudden that makes a whole different world.

Speaker 2:

Because now I assign a paper, and if someone can go into ChatGPT and crank it out in, you know, 10 minutes or less, how do I know that they've actually learned anything or put any thought into it? Well, here's a thought, and we actually played with this at the Air Force Academy a little bit: if you really want to know what someone knows, you talk to them; you give them an oral exam.

Speaker 2:

So I envision that someone's going to assign a paper on a topic. Kids will go to ChatGPT, and we may welcome that, for that matter; they still have to put some thought into what the question is and how to design it, perhaps. But then everybody gets a one-on-one, 15-minute, three-question quiz to show that they understand it. And if they just did the paper in ChatGPT and handed it in, they're going to fail that quiz.

Speaker 2:

We utilized this at the Air Force Academy in group projects, where we had three or four kids in a group, and, you know, the temptation is that one guy does everything, or one gal does everything, because, A, they might know it better, or, B, they might have more time, who knows, but they end up doing everything. So now you bring them in one by one, and if they didn't actually do something, or actually know something about the topic, it was pretty obvious by the first question.

Speaker 1:

Yeah, you know, it's interesting. I wrote... I say I wrote.

Speaker 1:

I've done three books. The first two were conscious living books, mostly spirituality. But I got really concerned. I have a foundation to teach AI skills to underserved communities, because I believe they're afraid of AI and they're not racing after it. People who deal with technology tend to forget that the majority of people are not out there using artificial intelligence, and that for them it's actually kind of scary. And so for me it was like, let me help bridge them, because there are 120 million transitioning adult workers that are going to get left behind, and I watched this happen when the tech boom happened in the early 2000s. A lot of people got hurt who could have been really talented or good at things.

Speaker 1:

But anyway, I did a book called A New American Dream, and I say I did it instead of saying I wrote it, although a lot of it is written by me. It's because I've watched the American dream disintegrate, and I call it dead in the book. It's not 100% dead, but it has gotten the crap beaten out of it. I've watched the jobs be shipped away, the companies go global, the politicians get paid off, the manufacturing die, our steelmaking die, all of these powerhouse things that we did as America. We were a maker of things, not just a consumer of things, and we've become a country that is a consumer of things, and that is very dangerous, especially as we move forward. I did a video two years ago about this, about my concern about the American dream dying, that we were entering times when the politicians just did not care, and the corporations stopped caring. They stopped taking care of their employees, they stopped giving them benefits, we went into the gig economy, and it's really, really hurt us, to a place where we find ourselves right now where things have gone up 50%. We're in this moment where America is at a crossroads. We can either continue on the road that we're on right now and watch ourselves become increasingly dependent on China and other places like that for our goods, and cease to be independent, because if you're not making anything, or not a whole lot, you're nothing but a pariah in many ways; or we can choose otherwise, and I really want to see us be a very strong country with our own industries that are empowering the world.

Speaker 1:

And so I realized one day that we'd entered the fourth industrial revolution, and I was like, wait a minute. Usually when industrial revolutions happen, we all sort of stand by and watch the corporations or the entrepreneurs use it to their own good. We don't actually sit back and say, wait a second, if we've entered the fourth industrial revolution, let's look at this for a second and ask: instead of watching it happen, how do we use this to empower the country? And so I wrote the book. But a huge majority of the book is through AI, because I was asking it. I mean, it's hundreds of questions that I had to ask it to get these answers. I was asking it, how do we solve this problem? The smaller problems, but they're collectively adding up to a lot of problems.

Speaker 1:

And so a lot of that book is artificial intelligence, and I was very upfront about that. I didn't want people to think they were just reading a narrative, you know what I'm saying? And I think that kind of frankness and honesty, and the use of it in that way, is very helpful. But yeah, I just went off on a rant, but with our younger generations we definitely have to be careful that their skills aren't superficial, which I think is what a lot of that is. Most of these 20-year-olds, or 25- and 30-year-olds, have grown up with a computer in their hands. I think it's one of the reasons why many of them are having mental and emotional issues, because their education is so superficial. When all you have to do is go Google something to get an answer, you don't actually understand the answer. You didn't go through the process to earn the answer, and that's dangerous.

Speaker 1:

Sorry, I just dumped a whole bunch of info on you. What are your thoughts on any of that? Did any of that stuff just, like, send up a...

Speaker 2:

Hmm, that's interesting. Well, so, you know, as I said, AI is kind of analogous to the calculator issue in math back in the 80s and 90s. You couldn't use them on exams, because that was cheating. And then it was, oh, we'll let you use them on exams, but you can't use programmable ones. And so about 2000, 2001, I was teaching at the Air Force Academy, and the guy who was the department head at the time, acting department head actually, said, you know, I think we need to look at how we're going to teach math like 10 years from now.

Speaker 2:

What does this look like? At that point in time, the academy had just started their laptop program. Every kid got a laptop instead of a desktop, so they could bring them into the classroom. How do we take advantage of this? There were about six of us on a committee, a group that sat down and started to think about this, and then we said, you know, we think we ought to start moving this way now. And so we started redesigning our courses, the first five math courses. We totally redesigned them, the idea being that we care about an answer, and we care about you perhaps knowing how to get that answer, but not as much as we used to. We now care much more about whether you understand what's underneath that.

Speaker 2:

And so we started having them write one- to one-and-a-half-page little papers on a topic or a problem. Here's your problem. How do you solve it, and how can you apply this to another problem? And the kids hated it.

Speaker 1:

Of course they did. Absolutely hated it.

Speaker 2:

Because, well, this is a math class, not an English class, blah blah. But we did find that if you gave them the same questions on an exam that you gave students five years prior, they would score much better, because they actually understood the material. They weren't learning a recipe.

Speaker 1:

Yeah, yeah, yeah, I think that's brilliant.

Speaker 2:

Well, I think it's going to be the same thing with AI. We're going to have to be able to assess whether they understand the material, not just that they can get an answer.

Speaker 1:

Yeah, it's so true. You just hit on something. When you're conversing with somebody, as you get older these things just sort of become natural, but you can figure it out pretty quickly. I've been stunned a few times on my podcast when I've interviewed somebody with this incredible resume, and then I go to ask them questions and I find myself wondering, who actually accomplished a lot of these things? Because the answers are not deep. They only go about that low, and then they get really lost. It kind of reminds me of some of the interviews on TV right now. I won't name any names.

Speaker 1:

If you can't speak to the issue or how to solve the issue, it's because you don't actually really understand it. It's one thing to be nervous when you speak, but if you understand a problem and you're given the time to answer it, at least enough time to catch your breath, gather your thoughts, and respond, you can do it. If you can't do that, you don't actually understand the problem. And if you can't verbalize it, then I'm not sure what good it does, unless you sit in a room solving issues all day with nobody else engaging with you. It's really fascinating. I'm curious about one thing, one last question for you. It may be off the cuff, may not be something that's really up your alley. What are your thoughts on BRICS? There are several countries that have come together to create a...

Speaker 1:

They're trying to, uh, lessen the power of the dollar.

Speaker 2:

Oh, okay. I wasn't quite sure what you meant.

Speaker 1:

Yeah, yeah, sorry, I'm probably saying it with my southern accent. But they've come together to try to lessen the power of the dollar, and I think that's been our edge for a really long time as a country. Do you have any thoughts on that? Just as a data scientist.

Speaker 2:

I mean, there are some people that say our edge went away when we went off the gold standard.

Speaker 1:

That's it. I've heard that a couple of times in my research.

Speaker 2:

Yeah, some people do say that. I don't know if I quite agree with it, but I do agree that if the dollar becomes, like, not the standard, then the US has lost a bunch of its power, which I think is the next big problem.

Speaker 2:

Well, it is. And not to get into a long discussion and everything else, but if we don't somehow rein in our debt, we're going to lose the power of the dollar, because we're going to have to print so many dollars to pay off the debt that the dollar gets devalued. And then the idea of a threat against the dollar as the global currency is going to become very, very real.

Speaker 1:

I agree with that. I think you just encapsulated in a couple of sentences what BRICS is. You know, those countries have wanted us on our knees for years. They've hated the fact that we were the superpower for a really long time. You know, China and Russia, and I forget who else is part of that group that's trying to go after the dollar.

Speaker 1:

But I feel like we need our brightest brains really focused on how to offset that, or prevent it if it's possible, and I'm just terrible at that myself, but you know, hey, it's the truth. But also, addressing our debt. The amount of waste in this country is absolutely breathtaking. And, you know, and this is my personal opinion, I'm not projecting this on Dr. Cooley, but we can't keep sending untold amounts of money to countries when, I'm sorry, I personally find it very difficult to believe that money is actually ending up as part of really addressing some dire need.

Speaker 1:

I think the majority of that money ends up in the pockets of people who are already in power, or even in the enemy's pockets, and unless we can trace it and figure out where it's going, you know, what are we doing sending money we don't have to foreign countries when we can't even pay our own debt? It's insanity. I mean, as a human being, if I was doing that, if I was giving away all of my money while accumulating massive amounts of debt, while watching my lifetime value die, people would say I've got major issues. It's crazy that as a country we're doing that and we're not saying, like, perhaps there's a problem here. Anyway, it's interesting. Any last thoughts on AI and adaptive education that you think are worth sharing before we close out?

Speaker 2:

I mean, I think we've covered it, but I do think that, at least in some areas, maybe not in education as much as training, the world of adaptive training is going to be the way of the future, because I just think it's got so much value to it. And, you know, reducing training time and getting the same product out. I mean, that's always the caveat, right? You don't want to lessen your product, but if you can reduce the training time, then that makes some good sense.

Speaker 1:

So thank you for stopping by. No, thank you, Dr. Cooley. You've just actually made me realize that that's got implications for the workforce, because I was always capable of producing, like, four times as much as the other people I worked with, and it used to irritate me that I wasn't getting compensated more. That's kind of an interesting future thought there. But anyway, thank you, Dr. Cooley. If people wanted to learn more about your work and the consulting that you do, where would they go?

Speaker 2:

I do have a website that's being redone, so I don't know if it's going to be up, but it's under www.dynamx.org. That's D-Y-N, as in Nancy, A-M, as in Michael, X, dot org.

Speaker 1:

Awesome, awesome. Wow, thank you so much for joining us today, and thank you so much to our listeners. You've been hearing Dr. Tim Cooley and citizen journalist host Cynthia Elliott talking about artificial intelligence, the military, and adaptive education today. If you have not already subscribed to the show, please subscribe. It's intelligent listening. The show is really growing very quickly and we have some wonderful guests on. If you are curious about the work that I do, you can go to ShamanIsis.com, where you'll find my three books: Unleash the Empress, Memory Mansion, and A New American Dream: Conscious AI for a Future Full of Promise. You can also visit thesoultechfoundation.org to learn about the work that we do to help educate underserved communities in AI skills and literacy so that they can thrive in the age of AI. Thank you so much again for listening and joining us today. You guys have an amazing week. Bye.
