Citizen Journalist

Navigating Geopolitics and Big Tech: Irina Tsukerman on AI, Education & Cybersecurity

Cynthia Elliott aka Shaman Isis, Season 1, Episode 10

Can major tech giants like Google and Apple be considered modern-day quasi-governmental entities? Join us on Citizen Journalist for an eye-opening discussion with national security lawyer and geopolitical analyst Irina Tsukerman. Irina unravels the intricate dynamics between global politics and big tech, shedding light on how these corporations navigate complex regulatory landscapes, such as those in China and the European Union. Our conversation also ventures into the burgeoning AI race, highlighting its monumental implications for both technology and society at large.

We then pivot to scrutinize the efficacy of our education system in preparing individuals for the AI-driven future, asking whether the baton should be passed to the government or the private sector. The ethical quandaries and legal ramifications of AI, especially in creative fields like art and music, are brought to the forefront. Finally, we tackle the escalating cybersecurity challenges posed by sophisticated scams, emphasizing the crucial need for human connection and critical thinking in an increasingly digital world. Don't miss out on this thought-provoking episode that decodes the powerful and multifaceted role technology plays in shaping our modern geopolitics and society.

Learn more about Irina Tsukerman at www.ScarabRising.com.
Learn more about Cynthia L. Elliott aka Shaman Isis at https://shamanisis.com/.

Enjoy the podcast teaser from Citizen Journalist 


Welcome to Citizen Journalist, the breaking news show hosted by author and futurist Cynthia L. Elliott, aka Shaman Isis. The show features breaking news and agenda-less analysis on important issues in politics, wellness, tech, etc., that impact the human experience. Our mission is to bring positive change to humanity through balanced and truthful interviews, commentary, and news coverage.

Through truth, we can heal and move forward, prepared for a healthier future. Inspired by the (often) lost art of journalism, we aim to bring the issues that matter to the top of the conversation. Citizen Journalist is hosted by marketing pioneer and two-time #1 best-selling author Cynthia L. Elliott, who also goes by Shaman Isis.

Elevating human consciousness through facts and solutions for a better future for all is what makes Citizen Journalist unique.

https://shamanisis.com/
https://soultechfoundation.org/
https://nationalinstituteforethicsinai.com/

Speaker 1:

Hi and welcome to Citizen Journalist. I'm your host, Shaman Isis, also known as Cynthia Elliott to my friends and family and fans of my books. I write under both names, and you guys are welcome to the show. We're going to be talking about international politics and big tech, and how those two things weave together, with one of the most interesting women I have interviewed in ages, Irina Tsukerman. Welcome to the show. Can you tell our listeners, in general, what it is that you do?

Speaker 2:

Thank you so much, Cindy. I really appreciate the invitation. Well, other than being an international woman of mystery, I'm a national security lawyer and geopolitical analyst. The work ranges from exactly the way people imagine it to more sophisticated operations that can involve anything from active measures such as election meddling, to political lobbying, to even intelligence operations, blackmail, and extortion by state actors, that sort of thing, and, of course, all forms of propaganda, disinformation, and influence operations of various types. My company is called Scarab Rising. Another thing we have in common: it has an Egyptian root. The scarab is a symbol of reincarnation and renewal, because, in addition to security and geopolitical risk assessments, we provide reputational management, in other words, resurrecting your image after it has been attacked or threatened by your adversaries, as well as crisis management and crisis communications.

Speaker 1:

Wow, that is quite a diverse portfolio, which I also find interesting. I'm like, where do I begin? I want to unweave this. In reference to Scarab Rising, I love the name, by the way. For our listeners, some of you may not be aware, but Isis is not originally a bad group of people doing bad things elsewhere. Isis is the Egyptian goddess of healing and motherhood and was worshipped worldwide for thousands of years, so only recently did the name of Isis get kind of confusing for some people. But you know my goal is to try to right that wrong. So, gosh, you know I love that. Big tech and information in international politics is super interesting, especially because you brought up disinformation and all of that. Can you tell us what's the current state, in general, of international politics and big tech?

Speaker 2:

Well, big tech is already playing a quasi-governmental role in most of our society. They are helping shape policies, events, communications, perceptions. There's a lot of debate about what role they should play, the aspirational ideal for big tech companies in today's complicated, globalized, intertwined society. But the reality is they are already there, whether anyone likes it or not. This complex dynamic of their interactions with actual governments, their competition with governments, their attempts to play a role that actually supersedes some government functions, those things are already ongoing. These are not hypothetical developments to be feared or worried about; they are already here. So how we manage them depends on a lot of things, but I think part of the reason we're not managing it very well is that we don't understand that the process started a long time ago.

Speaker 1:

Wow, you know, it's interesting. So when we're talking big tech, we're talking the Googles and Facebooks of the world. I remember when I first heard that Google was sitting in the White House in meetings with Obama, and I was really taken aback by that. On the one hand, the PR and marketing professional in me was like, well, it kind of makes sense, because they controlled the information highway, particularly here in the US. But then I was like, wait a minute, that's actually kind of confusing, because don't they have a conflict of interest due to the fact that they're a global company? And is that something that would have happened 30, 40, 50 years ago, that we would have a company with interests all over the world sitting in meetings at the White House? Is that the complexity that you find really interesting in that field?

Speaker 2:

Absolutely. Well, there are a couple of issues here. First of all, there's an issue of compartmentalization: to what extent are these companies able to separate their interests as American companies that are forced to follow US law from their private interests as companies servicing the world? These conflicts of interest range across things like regulatory frameworks on privacy and security settings, which are quite different even in friendly countries such as the European Union. There are frequent conflicts over regulation. The European Union, for instance, tends to put a much higher focus on privacy, disclosures, limitations, and things like that, much less so other countries that are authoritarian regimes, or failed or struggling states where there's very little social framework in general, or even the current polarized climate in the US, which produces vastly diverse expectations of the responsibilities of these tech companies. So the US government is also forced to struggle with these distinctions, because, on the one hand, these companies are valuable allies and counterparts, and oftentimes more effective intermediaries in messaging, bringing people together, promoting businesses, and generating jobs. On the other hand, if they have conflicts of interest due to where they operate, that may put US security and economic interests at risk, and we see that particularly with China.

Speaker 2:

Some of these companies are deeply invested in China. Apple is one of those companies. At the same time, Apple has become known as the technology associated with security; their phones are frequently used by human rights activists and so forth. So how can Apple reconcile its investments and involvement in China with its dedication to human rights frameworks? That's just one example.

Speaker 1:

Yeah, yeah, that's a great example. You know, I'm really fascinated right now because I'm watching the AI race. I think the words "AI" really throw a lot of people off. It's just technology that's gotten better, and technology, I think, we all understand. It's obviously more fascinating because it's moving into the territory of true artificial intelligence, and Apple just recently made that announcement about ChatGPT, and they have all these new features, and given your field, I thought I'd ask you about this.

Speaker 1:

So most of these companies have built these new artificial intelligence large language models off of, basically, scraping. Let's just be honest: these companies are basically scraping the talent of the world's history to build what will surely be trillion-dollar companies, and the lines between countries blur at that point. It just seems like it's becoming, well, globalization. But what do you think of that whole thing, the ChatGPTs of the world, large language models basically building themselves off the world's talents and now moving into the future?

Speaker 2:

Well, I think there's a lot of confusion, because AI as a term is sexy, it's provocative. It's also vastly misunderstood, because artificial intelligence is not some sentient being arising from the nethers of technological infrastructure. It's an algorithm, a human-made algorithm that can improve and become more accurate over time, but can also become stupider over time, and it frequently has. Actually, there are discussions about whether AI has currently hit a roadblock and cannot really improve further. More seriously, though, there are a couple of things that are noteworthy about these developments. First of all, there's a huge potential for generating a diversity of new jobs.

Speaker 2:

The biggest argument I'm hearing is the employment-minded concern about labor practices. In reality, as society progresses, it diversifies; it creates more needs and wants. As society develops economically, it creates needs and jobs it never knew it needed.

Speaker 2:

So I expect AI to play a role of that sort: to actually create forms of entertainment, forms of communication, forms of creative outlet, and research tools that simply did not exist 50 years ago and which will require a significantly larger labor force. But the key here is that the labor force has to have the appropriate skills and training and, of course, needs to understand the role these processes can play, what they can and cannot do. So we will most likely have more need for labor rather than less, but we do need to put a lot more resources into educating, breaking down barriers in communication, and understanding how these processes can be utilized effectively. The other issue is that, so far, we are facing the wild west of AI. Essentially, you have private companies experimenting wildly with these exciting new innovative technologies, and on the one hand, I think it's hugely inspiring, because they are capable of doing something that governments can only dream about doing.

Speaker 1:

At the pace they work at.

Speaker 2:

At a rocket's pace and so forth. On the other hand, it is a completely unregulated landscape in the sense of policy frameworks, of what you can and cannot do. I'm talking about the lack of structure, the lack of concrete aims and objectives.

Speaker 1:

Lack of oversight.

Speaker 2:

Exactly. When governments focus on innovative technologies, they focus them on specific needs, to answer specific questions, in national security, in defense, in improvements of particular processes, whether internal or external. Governments are accountable to the taxpayers, to some extent, so they develop things based on specific needs. That is limiting on the one hand, but on the other hand it creates a framework for progress that is trackable and measurable, where you can hold the developers accountable for success or failure in answering a specific question. With ChatGPT and all these other sources, I'm sensing a general sense of excitement without any particular purpose. What exactly do the founders of these technologies want to see? What questions are they aiming to answer? What tools are they hoping to build? Where do they want to see their products move? What skills do they want to see developed? Part of the reason AI is not as successful as it could be is the lack of a concrete objective, vision, or strategy on the part of these developers.

Speaker 1:

No, I can totally agree with that, yeah.

Speaker 2:

And I think these stumbling blocks in learning, these hallucinations we are facing, are a byproduct of this experimentation that lacks the degree of discipline you need to make any product successful.

Speaker 1:

Yeah, and it's always like this with technologies and new things: they're looking to monetize them over developing a stronger short- and long-term vision, and there's always that push-pull between the two. I find artificial intelligence advancements right now to be super exciting. Obviously they're very focused on coming up with temporary solutions, given the technology they have, that they can monetize to feed the system and encourage investors. They're trying to show what the long-term potential can be by at least creating what they can right now, and a lot of the tools are really handy.

Speaker 1:

I do think there's a huge disconnect, though. There's going to be a timeframe where the jobs disappear, where these tools eat up the jobs that exist right now, because a lot of them don't require a human being, given the direction of artificial intelligence. We should really be utilizing that time to train people to understand where it's headed, what's going on with it, and what the potential can be, and at least give them the mindset to be able to work within AI, whether that's training them to repair robots or something else. Part of the struggle with this whole thing is that it's moving so quickly, and at the same time so wildly, that knowing what those jobs look like is actually difficult and still isn't quite settled. But there has to be some in-between we can do in the meantime that can start to develop a workforce ready for it. I just find that really interesting.

Speaker 2:

Well, not only do I agree with you, but I want to point out some of the problems we already have that have nothing to do with the explosion of generative AI we have witnessed over the past year or so.

Speaker 2:

We are talking about regular STEM jobs. We are talking about regular engineering jobs: AI chips, the semiconductors that everyone's talking about, which are inherent to an assortment of systems, military and computer technologies, anything that drives current technological progress. We don't have enough skilled people to do those jobs. We don't even have enough to complement the foreign workers we are seeking to bring in from Taiwan and India, because we have not articulated the demand and the benefits of such jobs to the incoming generations of students. So, quite frankly, there's a skills gap. I am very concerned about the skills gap emerging with AI and other technologies, because if we are not able to understand the value of these jobs and outline the framework of where they're going, how are we going to introduce them as desirable subjects in schools? Who is going to provide training for existing workers seeking to transition to these jobs as automation makes their positions disappear?

Speaker 1:

And at the pace our school system works, it's almost laughable.

Speaker 2:

Should it be the government? Should it be the private sector? And if so, what sort of basic skills are considered sufficient to really answer these questions? If it's not the government, to what extent can the private sector outline such basic education for the general public, as opposed to people who are already somewhat sophisticated in technology? And given the conflicts of interest and competition between the various companies, all of them striving to bring the best AI models to the public, can they be trusted to create something that's useful and not just more confusing? So this is an open question. Yes, we all know that we need to invest more into AI-related education, but who should be doing the investing, who should be doing the educating, and who should be the people targeted for education?

Speaker 1:

You know, honestly, I love the idea of it being so structured that it could be part of the school system. But given our history and how antiquated the system is, I mean, look at the way we're still treating our teachers here in America, as though they're not of value or importance, which is insanity when you think about it. And I know there's truth to this: the school system in America was structured to create manufacturing workers. It wasn't designed to create thinkers. Can we just accept that and tear the system down and rebuild it for the future of technology? Because at the rate we're going, it will demand that companies or people step up to help educate people, to give them the skills needed to work in those fields. I just don't see our government moving at that pace. Some countries can do that, but they tend to be either smaller or authoritarian.

Speaker 1:

That's where I was going with that. So, yeah, really interesting. What do you think of the legal aspects? Do you think there'll ever be a comeuppance for the fact that most of these large language models, like ChatGPT, built their capabilities off of scraping the world's information? To give you an example, when I first started using generative AI, having been around art for decades, I recognized the artists. In those first six months I kept going in and thinking, I don't like generating art where I can identify where the art is being taken from. So I know for a fact that's what was happening, even with some of the more obscure fantasy painters; I recognized that person's style. How do you steal the ideas of all the greatest minds in the world and not pay anybody any money for it?

Speaker 2:

Well, here's the thing. Intellectual property has always, long before technological advancements, been a balance between the rights of artists to profit from their ideas and the rights of the public to eventually have access to at least the concepts and the ideas, not necessarily the works themselves. That's why patents have limited terms and have to be renewed, and the same goes for copyrights. That's why, after somebody's estate ends, some of these intellectual property works eventually enter the public domain; they become public knowledge with basically free access. I think something similar may eventually happen with AI, though I do think it will take a different form.

Speaker 2:

I think it will be exceptionally difficult to track every obscure piece of intellectual property on the internet, especially if it's not taken whole, if it merely serves as inspiration for creativity. You can't honestly hold someone accountable for imitating somebody's style. So there is going to be a distinction between imitation, inspiration, or a combination of certain elements, on the one hand, and wholesale plagiarism of an entire work or sentence on the other. And I think this is actually very similar to the sort of struggles that intellectual property law has already settled in some respects. So I actually don't think it's as terrible conceptually as people make it out to be, but it is more complicated due to the sheer size of the available information.

Speaker 1:

Yeah, but they're going to put entire industries out of business, and they're already doing it, or cause them to be dramatically transformed. Look what's happening in Hollywood; it's literally falling apart, and music is being dramatically affected. The artist in me says, okay, if you're going to decimate entire industries, the least you could do is compensate the people whose work you're using. There's a reason we have laws that allow people to be compensated for a specific term of use of whatever they've created or designed, and none of that is actually occurring. It really does destroy careers and affect people's finances. And I think it's easy for the tech companies, in their race to be the first and the dominant, to just dismiss the long-term impact that has on tons of artists and musicians. I just find that subject really interesting. To switch gears here, because I could just pick your brain all day.

Speaker 1:

Your brain is amazing; honestly, I love it. You're dynamic and intelligent and have so many diverse thoughts; I can just hear them all running through your mind. A company in Switzerland, I believe it was, has been basically scraping brain cells and using them to power AI. And you know I'm fascinated by cyborgs, because I know we're headed there very quickly; it's the next level of body modification.

Speaker 1:

I've been saying that for like a year now, and I find that really interesting, and it's going to happen no matter what. But what does that look like from your perspective, given your specialties?

Speaker 2:

Well, I think right now we're at a very early stage of this sort of experimentation. I think Neuralink is by far more advanced in the sense of combining human and cyber sentience, implanting chips that help people gain certain abilities they would not normally have.

Speaker 2:

Now, with those brain cells, I'm not sure to what extent that company knows the exact function of each cell. I'm hoping the cells they're using are ones they are familiar with, given that most of the brain is not even fully understood by neurologists and so forth. I think they need to be extremely careful in how they experiment with this technology. But I do think that, at the same time, having separate brain cells is no different from having separate cells of any kind. It doesn't mean they're transferring the entire human mind into some technology; they're basically studying the function of human cells and looking to improve existing processes, which were in some part modeled on neural networks to begin with. So I don't think we're going to face some sort of existential disaster as a result of this type of experimentation. I just think this is yet another phase in trying to understand how the human mind works and to improve processes in technology.

Speaker 1:

Yeah, wow. You have so much experience in national security. Where do you see things going, given the current circumstances, which are diverse and dynamic?

Speaker 2:

One is something I predicted a few months ago, and it started happening almost immediately, though we're only hearing about it now: generative AI is being used in very sophisticated phishing scams. For example, advanced cloning of a boss's voice was used in one Hong Kong-based operation. As AI cloning technologies improve and video deepfakes become more advanced, I think that will become much more of a problem. The biggest problem is not the advancement of the technologies. The biggest problem is that human beings, in corporations and elsewhere, are not even prepared for the regular phishing scams and deepfakes, and we are lacking basic training and situational awareness at all levels, including in corporate contexts, including at the very same big companies that are generating these new technologies. And that, I think, is something that has yet to be addressed by anyone.

Speaker 2:

The question is, where will responsibility lie? With the end users who are being fooled by these technologies and these scams, or with the companies that are generating them? Right now the trend is still to place the burden heavily on the end users, but it's starting to shift. There is a growing concern that big tech companies are simply not doing what they need to do to safeguard the basic security of their databases. What happens when the technologies, when AI, go out of whack and are misused? Should people who are unaware of it be punished for that? Should every end user be responsible for researching every intellectual property violation or every deepfake out there, or should it be the company, which has far more resources and developed some of these procedures to begin with? I think we will find that more responsibility shifts to the companies over time.

Speaker 1:

Yeah, the scams have gotten really sophisticated. I can see, at some point, people getting to know each other through their fake avatars and finding that more authentic and real. But is it authentic and real? How do you actually know? And then there are the others who are going to really love and play up the whole idea of living in this fantasy, fake world. The mental health implications are, well... yeah, definitely.

Speaker 2:

The metaverse, I think, will make a comeback at some point. But on the other hand, we have seen the current controversy over the use of cell phones, with teenagers being disproportionately affected, and people trying to ensure that minors don't have excessive access to social media, given the addictive nature of some of these algorithms.

Speaker 2:

I think these debates will continue. I don't think there's an easy way to resolve these questions, and there will be more of them. In terms of what you said about authenticity: I was just at the NATO StratCom conference in Riga, Latvia, and I visited the local marketplace of products and services, including those related to AI. Some of the language models I saw there are set to completely disrupt our understanding of delivering information, including disinformation. For now these technologies are in the hands of the good guys with good intentions, but I think it's only a matter of time before such technologies end up in the hands of their very adversaries, and we end up in a system where we can no longer trust who the messaging is from at all, because the level of realism, the level of sophistication of these mechanisms, and the level of subtlety in information warfare will be unprecedented.

Speaker 2:

So the question is: how do you maintain any trust, any ability to communicate? How do you not withdraw and de-globalize when every technological weapon is likely to be employed to create this sense of distrust?

Speaker 1:

Yeah, it's so interesting. You know, I was thinking: we talk so much about young kids spending too much time on social media and having access to, you know, the world's information. But I think something people don't understand about the mental health crisis among younger people is that the human body and mind are not designed to be on all the time. And I know we can always evolve, we're always evolving, but they weren't designed to move at this pace, to have access to godlike powers and the world's information from basically birth. And I don't know that we've ever actually studied the burden of being born into a society that has no pre-existing generations with that same level of experience.

Speaker 1:

And these young kids are just sort of born into the world with the phone and the iPad. They can answer any question, find anything out, learn anything, and that's actually a huge burden. It robs them of that natural, or at least historically natural, process of stumbling and falling, learning and digging, growing and evolving at a more natural pace. And being inundated around the clock with other people's successes and ideas can be really negative, a lot of negative, frankly, given the media they're inundated with from childhood. I just find that really interesting, and I think it's partly responsible for the mental health crisis we see with young people.

Speaker 2:

Without question. And let's remember: just because you have access to theoretical information doesn't mean you will be inquisitive enough to follow through and actually investigate every aspect of the information or every question you're asked. Sometimes people just automatically trust algorithms to deliver accurate information, and that's simply not true. The algorithms are not always accurate; they're not going to be 100% foolproof. The responsibility still lies with the human being to use their mind to critically evaluate information, and that, I think, is what kids are being deprived of: this evaluation ability, this assessment, this differentiation between true and false, between objective facts and subjective opinions. This whole concept of "your truth, my truth" is not really helpful, because it deprives people of being able to say: this is what happened, based on the information I was able to gather; this is one way of assessing it; this is this person's subjective experience and how it colors the process. They're no longer able to make those distinctions, and I think this is part of the problem we are having with all these polarizing dynamics we are witnessing.

Speaker 2:

The other issue is just because you've gained a valuable tool in information processing doesn't mean that you need to lose everything else that's out there.

Speaker 2:

Information, burdens and all, should not be the only part of life people are focused on. So the question is: okay, great, you have access to the phone, you can talk to your friends, but why don't you want to have dinner with them in person? Why don't you want to go out dancing, or play a sport, or travel someplace and experience not only the phone but the sights, sounds, and so forth? Living in this one-dimensional reality is confining, and I think the problem is more than just the negativity. Yes, negativity is out there, but there are also positive experiences, experiences that require your human engagement, compassion, wisdom, and direct interaction, not just information per se. It's the valuing of this access to information over all the other things that exist in life; that's part of the problem, I think, and those other things are not being exercised enough. We are talking too much about what to limit and not enough about what to emphasize and how.

Speaker 1:

Yeah. A big part of the work I do at the SoulTech Foundation, aside from fighting for ethics in artificial intelligence, is training people from underserved communities in how to understand mind, body, and soul harmony and the basics of working in AI. This is a relatively new venture of mine, because I see a need in society: they're going to get left behind if they're not taught those types of skills. And I'll expand from there.

Speaker 1:

But I think the younger generations need to be taught mind, body, and soul harmony, which sounds preachy, but it's not at all preachy; to me it's science. If they do not understand how to control their thoughts, how to control their heart rate, how to control their breathing, they're going to find themselves constantly living in that state of anxiety that I see. It actually takes my breath away. I can always tell someone's age by their skill level in engaging with people face-to-face, particularly people who are not of their generation, and by how much they want to stay in their room. It's a huge giveaway, because they've become so accustomed to living in a box, a cell phone or a computer or whatever. I just think one of the best ways we can help them is to teach them to get back into their bodies completely.

Speaker 2:

I think, actually, what you're telling me reminds me of Japan and how they integrate this sort of harmony into every aspect of holistic teaching. It starts with understanding nutrition and its value, not just "here, eat some food" but talking about its value and presenting it, and presentation is very important, almost ritualistically so. It also means giving responsibility at an early age, encouraging engagement with society at an early age, and encouraging going out into the world, even on your own, even if you make mistakes, rather than simply blocking yourself away from all human pain, experience, and fear through these safe, anonymized networks. I think there's a lot to be learned from that.

Speaker 1:

Yeah, it's interesting, because I was thinking that America can definitely learn a lot from India and Asian cultures, where this type of conversation is actually part of the culture. It's not segmented off and pigeonholed into religion, which is such a weird thing. I see so many people do that all the time: when you start talking about meditation or yoga, they think you're talking about religion. It's like, no, I'm just talking about taking care of myself. So, gosh, it's so fascinating to speak with you. I could literally be sitting here looking at the clock like, oh darn, our time is up, but I could just keep having this conversation with you. So interesting. If people wanted to learn more about your work and your business, where would they go?

Speaker 2:

Well, I have a few websites. First of all, Scarab Rising is one of them. I also have something called The Washington Outsider, a project of Scarab Rising, where I give voice to different, often overlooked people from around the world. Their perspectives don't always make it into mainstream media, but that doesn't mean they don't have value. There's also The Washington Outsider Report on Coalition Radio.

Speaker 2:

That's my video podcast, where I interview people from around the world in depth, for an hour to an hour and a half: Americans, people from everywhere, Africa, Asia, Latin America, you name it. A lot of it is on global and security issues, but not only; sometimes it's about economics and logistics, or book talks, things of that nature. And of course, there is also the Washington Outsider Substack account, where I gather my weekly media appearances. So if people want to see where I've talked and what I've written, that's a handy way of gaining access to all of it. I also put my work out on LinkedIn, so anyone wishing to connect with me professionally can connect with me there, where I post all my work as well.

Speaker 1:

Amazing, amazing. Well, thank you so much for joining us today, Irina Tsukerman; you were fascinating to speak with. You guys, if you're not already subscribed to Citizen Journalist, you know, I'm thinking you need help. Come on, I mean, intelligent listening; it doesn't get much better than this. So please subscribe. And let's see, before I go I have to mention a few things. Thank you so much to everybody who ordered Memory Mansion. As most of you may know, it hit number one in its category and the top five in all three of its categories, so it's a bestseller. And my new book, A New American Dream, comes out September 7th. Thank you so much to everybody who's already supported it; you guys made it number one in social theory, which is amazing, and I really appreciate it.

Speaker 1:

If you're in the New York area: on July 2nd, from 6 to 7:30, at the Opera Center in New York City, I'm doing an event for A New American Dream, to talk about how to use technology to reinvent the American dream, and I would love for you to join me. Visit shamanisis.com and you can RSVP there; you can also find A New American Dream on Eventbrite. Would love to have you there. It's going to be a fascinating conversation with myself and Weta Duncan, talking about how to utilize technology to level up. So if you want to learn more about me, go check out my websites, soultechfoundation.org and shamanisis.com, and you guys have an amazing week. I'll be back soon enough with more from Citizen Journalist. Thank you, Irina, have a great day.

Speaker 2:

Thank you so much. You too.

Speaker 1:

Bye you guys.
