Spiral of Implications: Ripple Effect
So now we're gonna talk about ripple effects, also known as the spiral of implications. We've been jumping all over time: I asked you guys to think about 10 years in the past, 10 years in the future, and beyond. So, imagine the ripples in a pond. You've dropped a stone into a pond, and that stone is some sort of event, some triggering event in a story. It could be that moment where the robot takes over my company. It could be something else. It could be about an ice cap melting, or some other triggering event or phenomenon. This is now about what's next, what happens next. So there's a future scenario, and then from that scenario, there are different implications: social implications, cultural implications, political implications, ethical implications, all sorts of implications from that one scenario. Right? And just because one thing happens, it doesn't guarantee that something else will happen. It's not a simple if-this-then-that. It's: if something happens, if a robot takes over my company, scenario A could happen, scenario B could happen, scenario C could happen. All sorts of things could happen, and we can also think about these through different concerns or lenses, whether we're thinking about the social implications or the business implications. We should take into account a lot of different implications to help us think about what is implied by some future. And then from there, we can make judgments about it: is this a preferable future? And then work backwards to figure out what choices we have to make today to make that future happen, or to prevent it from happening. Now, this is best done as a group, so I asked some students here in the audience in Seattle to come up and help me. We'll do some practice with this: we'll go through a few different scenarios, put each one in the middle of the board here, and then map out these different implications. This is where tapping into that collective intelligence is really helpful, because people come from different perspectives, and they might think of best case and worst case scenarios that one person alone wouldn't think of. So the more, the merrier. I'm gonna ask Lauren, Cindy, and Paul to join us up here, and we'll start with the first scenario. All right. How's it going?
Hi, it's going well.
Thanks for coming. Thanks for volunteering. Awesome. Thanks, guys. So, we only have three markers, but we can share. I'll give you guys those. We'll start with my scenario, and then we'll probably have time for another scenario. You can do this any way you like; really, the visualization is not important. So we'll start with the robot-taking-over-my-company scenario: the robot COO. All right. So now we're looking forward in time. We've just jumped to this future. For this exercise, it doesn't matter how this happened; we'll deal with that later. So, regardless of the plausibility of this happening, we'll just assume it's a fact. And then let's go from here: what is an implication of this? What might happen the day after? What's an accident that could happen? What's a positive thing that could happen? Let's just map it out, and feel free to draw and diagram this with me. You can come a little closer.
Just start writing right away?
We can just talk about it too.
Hiring and firing won't be based on human input. It would be based on set algorithms.
Let's just call it algorithmic HR. We know it's about hiring and firing, right? From there we can talk about potential risks and rewards. When we're thinking about these ripple effects, you can often fork off into different perspectives and assign them values, positive or negative. So one positive could be efficiency, but then a negative could be discrimination. If something discriminatory is baked into the algorithm, if the training data has been discriminatory or skewed in some way, this algorithmic HR thing could be a nightmare for some people. We can mark these as pluses or minuses here, and then go from there to be more specific. But it's helpful to start with a breadth of implications, and then chase them down into specifics in the time that we have.
I was gonna say maybe the activation energy you have to overcome to approach a robot COO, like how to be mentored, or how to navigate your career.
Mentorship, or the lack thereof, right? The robot COO was probably trained with data in some way, but it doesn't have that kind of personal struggle or emotional journey of how it got there. You lose that in a way, right? So you can think about the trade-offs: maybe you gain efficiency, but you lose the mentorship. And if we wanted to pursue this, you can think about it in terms of a story. So maybe there's some employee who's algorithmically hired who wants mentorship, or tries to get mentorship from the robot COO, but just doesn't get anything satisfactory, because there's that culture clash between human and machine now. And that's another ripple in the spiral of implications: culture clash.
I think the power dynamics, the political implications of now your boss isn't someone who has more experience or makes more money than you. Your boss is a robot. Maybe there's a change in how you respect the leadership or how you interact with them.
There's this whole culture, power, respect thing. I think it's related to some of these other things too, like the cold algorithmic HR. And it's also interesting, I mean, we're just doing this thought experiment right now, but you can also look at some weak signals in the recently possible. In some industries, particularly retail or fast food, there's already algorithmic scheduling. So the boss might not be a robot, but there is an algorithm that's trying to optimize and telling you when to go to work, for example. That's shifting the culture in some ways.
Another impact is, if it's a robot, it can work 24/7: outside office hours, on holidays, on the weekend. So, how does that impact the workload of everyone else who has to report to that COO? Are you expected to be available just like a robot?
It has an effect on compensation, I guess. Is the old COO's salary redistributed to the employees?
Or just how it cascades into hiring as well. If the robot COO does really well, does that mean other positions would be up for automation?
Yeah. There's this amplifying effect of more automation.
Or the pushback too. People will sabotage; they don't wanna lose their jobs to robots, so they're gonna sabotage the robot COO. (laughs)
Right. What if somebody pulls the plug? We've talked about that before: the pushback effect, the pendulum swings. And we can also look back in history. Think about the Luddites. That was a movement of folks reacting against the industrial revolution in Britain, deliberately sabotaging the mechanized looms that were replacing manual workers. What is the equivalent here in the AI age? I think there are legal things too. We can look at, say, autopilot on commercial airlines for inspiration: most flights that we take are flown on autopilot. The humans are there for backup and safety for the most part, and really more for our psychological safety most of the time, when there isn't an emergency. But they're also there for legal reasons: if the machine fails, the human's performance is the stopgap. So, what if we applied that here to the robot COO scenario? You can see how a broad knowledge base of history and culture, legal implications, economics, helps us understand this and build out a world. We don't have to touch on all of these things; this is just the planning for our story world. So we could build a story around, like we mentioned already, somebody who wants mentorship and can't get it, or some other vignette about sabotaging the robot COO. And all of these can be very short stories that bring this world to life. This is just a chart of words, but I think we can react and respond more emotionally to it when it turns into a story. Are there any implications or potential stories that you see here that you might pitch?
I mean I guess one of the things a COO does is also network with other COOs. So is the robot COO talking to other robot COOs? Are they conspiring together or not? Are they competing?
Totally. We've seen some interesting scenarios like this already that we might be able to play with, from the recently possible of our weak signals, where people get their Alexas or their Google Home devices to talk to each other. Since I use an AI service for my scheduling, sometimes my AI scheduler is talking to my client's AI scheduler. One time it actually created some anxiety, when my robot scheduler was talking to my client's human assistant, and then it was revealed that she was talking to an AI that was coming for her job. Yeah.
Questions about transparency. How much of the decision making is transparent? Because you could talk to your boss and ask, "Why did you make that decision?" Whereas algorithms are kind of a black box; it's not always clear how they came to that decision.
Totally. Alright, so let's get some more feedback maybe from the room to see... Do you guys see any other possible implications or sort of stories of what could happen here?
If the optimization algorithm for HR is based on pure numbers, it has the very unfortunate potential of undermining women's rights, because a woman getting pregnant would suddenly lead to a lot of time off. If the algorithm isn't smart enough to account for that, it would lead to her getting fired, and maternity leave would cease to exist.
Right. So yeah, that's a really important one, thinking about the gender implications of something like that too. That's important in our story as well, in terms of what characters we introduce. We have the robot COO character. For the employees, is one a woman? Are we thinking about gender or racial and ethnic discrimination that might happen because of this?
It would automate a lot of KPI identification. You'd know right away what's working and what's not; there won't be ifs and buts about it. It's just: this is either working or it's not.
Yeah, and maybe it's also about a realignment. It's good to not just focus on the negatives; I'm really glad that you've found some positives here. What are the opportunities in a realignment of this division of labor between humans and machines? We lose the mentorship angle, but maybe some of the metrics become easier.
What are the infrastructure needs of said robot COO? Is it hosted somewhere? Do you need a certain amount for the bandwidth?
Yeah. Infrastructure needs, right. I recently read about how all of the bitcoin transactions and the blockchain are requiring so much electricity, more than Ireland, more than a lot of small countries. It's a huge power drain. Yeah.
It's pretty unlikely that algorithm-driven HR would account for race either. So racial discrimination might actually cease to exist.
So you could think about best case and worst case scenarios, where maybe it gets really discriminatory, or maybe it gets rid of discrimination. But then we also have to think about how we designed it, how we got there, so that that happens. And that's another thing that we'll practice in another activity. Yeah.
So management style and leadership style would definitely be different. People working under the robot might feel like they have Big Brother watching them all the time.
Yeah. So it goes back to that culture and management thing, and maybe that's another story or vignette we could tell, about somebody feeling like they're always being tracked. In a lot of cases they already are being tracked, in some of these companies where people are being monitored.
It'd be a lot more dynamic in terms of decision making. People have a strong bias to stick with a decision once they've made it, but a robot would have the ability to react quickly to market changes.
And we can maybe also look at, say, the world of finance and automated trading, and how that works, and think about that kind of decision making as well. So you can see here: just with one triggering event, one spark of a moment of a story, we started building out this world, and there are all sorts of sub-stories that we can tell. And this works best, like we've seen here, tapping into the collective intelligence of a group. So let's try this with one of your stories, whether it's one of yours or somebody's from the room here, and we can just practice this one more time. Do you guys have a story that you wanna use as a scenario, or does somebody else in the room?
This is the future story?
Yeah, it could be the 10-years-in-the-future story, or it could be some other event that you've just come up with now. It doesn't have to be cataclysmic or anything. We could also look at the driving forces, like say there's really fast climate change, or there's... I think maybe we have one about remote work too.
Ubiquitous VR. Ubiquitous VR, so VR is everywhere.
Awesome. Cool, let's do that. All right. So, implications spiraling out from VR everywhere. I think we can start with some of the potential negatives, like social isolation. What do you guys think about this one? Maybe even addiction. You can think about this as the spiral downward or the ramp upward.
The positive would be there's no boundaries. You're not stuck in your small town. You can be anywhere in VR. You can be anywhere and anyone I guess.
We could also...
Go for it.
I was gonna say the counter to social isolation. It could network you to other people. So you could actually be looking at the same thing as your friend across the world. You could share experiences at the same time.
These could be in contrast to each other.
To counterpoint the no-boundaries aspect of it, I think you would create specific experiences, and so you would have to educate a ton of people so that they have the power, or at least the tools, to create experiences that aren't targeted only at a specific audience, at people who already know how to do VR.
So it's more like micro targeting. What was the sort of...
I guess a good example I can use is, I feel like most VR today is first-person shooters or zombie games, which don't appeal to me, but I don't know how to do VR. So none of the everywhere experiences apply to me.
So is it more like inclusive content?
Yeah, there you go.
And we can also think about this as: is this an implication, is this a prerequisite, or is this an indicator? For right now we're just putting everything on the board, which is how we start.
But as we move forward, it is helpful to think about this. If VR everywhere is the given in this future story, then maybe because VR is everywhere, the content becomes more inclusive; or maybe the content needed to be more inclusive before we got to VR everywhere. We can think about the vector, whether the trajectory runs one way or the other. Neither is wrong; let's just get everything on the board, and we can always sort that out. As we start building out our toolkit, it's helpful to think about indicators as well. Indicators are similar to these weak signals, but they're things that tell you whether it's gonna go this way, or might start going that way.
I was gonna say maybe it leads to class inequality: people who have access to VR and people who don't. Think about today, people who have smartphones versus people who only have dumb phones, or people who have internet access at home and people who don't, and how much that makes life difficult for those who don't have access.
That's a tension too. Does it exacerbate this digital divide stuff, because everyone has to buy this VR gear, and if you can't afford it, you're left out? Or is it part of the everywhere thing, a radical affordability, where maybe you can't travel, but at least you can get the next best thing and visit the pyramids in Egypt in almost the same way?
I think accessibility is another issue. How do you create VR for people who are sight-impaired? We've done that with computing now, but at the next level, how are you gonna do that in VR?
Right. And then from there, even thinking about legislative implications. Think about broadcast TV and the rules about having closed captioning to make it more inclusive. At what point does VR, as a medium, get regulated in a way that requires this accessibility? Or is it a cultural norm thing, and then who's included, who's excluded? We've just started to surface some of these questions. Yeah.
On the digital divide thing: one of the reasons that VR is becoming so popular now is that it actually quashes the digital divide, because it's the one medium that everybody in the world has access to. In a lot of places where there's no infrastructure at all, the only communication mechanism people have is smartphones. So it actually has expanded the reach of media drastically already, and I suspect that will only continue, because the investment required to experience VR is very small.
Sure. You could go both ways on this, the plus or the minus, and then come up with different stories for either of those. Yeah.
Two I guess. One is it creates new jobs, whether it's in VR or building VR. The second one is getting a lot of latter as a result.
Yeah. Yeah, we'll get HD and then go back to Jose.
Physiological impacts. Disruptor (mumbles), having light in your face all the time.
On that line, kind of, the need for physical hardware to actually be able to participate in that VR, and it splits into positives and negatives. On the positive side, you could make it yours, accessorize it, make it part of your identity. But on the negative side, it's also another piece of hardware that needs to be designed and produced, and could be expensive and all that.
Yeah, and I think hardware, broadly defined, could be both: an implication, like VR everywhere could mean the hardware becomes ubiquitous or the cost goes down the way smartphone costs have gone down; or an antecedent, like the hardware has to get to a certain point of minimizing physiological effects before it becomes mainstream. Yeah, we'll get a couple more. So, you.
I'm sorry, I'm like hogging this.
Laws, right? And it's a totally different way to catfish people, which would be weird.
There could be mental issues at stake, where if you have mixed realities and you're plugged in all the time, you could start to really not recognize, or give attention to, the difference between the virtual world and the real world. And then there are also the physical aspects of that: you're not walking around as much, you're not exercising as much, and your health is going downhill.
Totally, there's mental and physical health; we have some of the physiological stuff here too. So, we've started to create a world around this as well. And what's next from something like this is that you can choose any of these implications and go deep into them, creating a scenario that explores, say, the legal implications. When you do this, it also helps you understand what sort of experts you need to bring in. Maybe you're a VR company thinking about this, and this surfaces the need to talk to an economist, to talk to mental health professionals, doctors, lawyers. It brings all this out, in terms of these different scenarios, to help you understand that context.
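For readers who want to capture one of these ripple maps outside a whiteboard session, the exercise can be sketched as a simple tree: one triggering event at the center, implications forking off it, each tagged with a lens (social, ethical, economic) and a rough plus or minus. This is only an illustrative sketch; the class names and fields here are made up for this example and aren't part of any established toolkit.

```python
# Minimal sketch of a "spiral of implications" map as a tree.
# Everything here (names, fields) is illustrative, not a real framework.
from dataclasses import dataclass, field

@dataclass
class Implication:
    description: str
    lens: str               # e.g. "social", "ethical", "economic"
    valence: str            # "+" for opportunity, "-" for risk, "" for the trigger
    children: list = field(default_factory=list)

    def add(self, description, lens, valence):
        """Fork a new ripple off this node and return it for further forking."""
        child = Implication(description, lens, valence)
        self.children.append(child)
        return child

def walk(node, depth=0):
    """Flatten the ripple map for review, yielding (depth, node) per ring."""
    yield depth, node
    for child in node.children:
        yield from walk(child, depth + 1)

# Triggering event: the robot COO scenario from the session.
event = Implication("Robot COO takes over the company", "trigger", "")
hr = event.add("Algorithmic hiring and firing", "business", "+")
hr.add("Baked-in discrimination from skewed training data", "ethical", "-")
event.add("Loss of human mentorship", "cultural", "-")

for depth, node in walk(event):
    print("  " * depth + f"[{node.valence or '.'}] {node.description}")
```

Walking the tree prints the map as an indented outline, one ring of the spiral per indent level, which makes it easy to scan for the pluses and minuses the group surfaced and to pick a branch to turn into a story.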