Instagram seems to always be testing a new feature or two, along with a screenshot that shows how the feature works. "Link stories to make a storyline," Paluzzi's screenshot of the feature reads. "Followers you follow back can link a new related story to your story." From this description, it looks like Storylines are basically collaborative Stories built by you and your friends in the app, similar to a more collaborative version of "Add Yours." "The storyline grows as friends add to it," the screenshot of the feature continues. "Change who can join or turn storylines off in settings." So you could, for example, post a Story about your birthday, and your friends could add their own birthday content, building an even bigger Story together on the app. "Stories in a storyline show up together," the feature reads. But don't get too excited: a Meta spokesperson told Mashable that "this is an internal prototype and not testing publicly." So it's not clear when, if ever, this feature will be widely available. This article has been updated to include a comment from Meta.

Christopher DiCarlo, philosopher, educator and ethicist who teaches in the Philosophy Department at the University of Toronto. Author of "Building a God: The Ethics of Artificial Intelligence and the Race to Control It."

DEBORAH BECKER: Artificial intelligence, essentially where machines do things that require human smarts, is not only here to stay, but it's growing exponentially, with the potential to completely transform society. So the world's tech leaders are in a race to try to harness the power of AI, and most of them insist that it's going to benefit all of us.

JEFF BEZOS: There's no institution in the world that cannot be improved with machine learning.

TIM COOK: I have a very positive and optimistic view of AI.

BECKER: Optimistic in part because it's believed that the world's first trillionaire will be the person who masters AI and uses it to improve various aspects of life and work, from performing mundane tasks that we might rather avoid, to actually extending our lifespans. That's not to say there aren't concerns about this, though. Neuroscientist and philosopher Sam Harris thinks AI poses an existential threat.

SAM HARRIS: One of the greatest challenges our species will ever face.

BECKER: And Nobel laureate Demis Hassabis, CEO of Google's DeepMind Technologies, is at the forefront of AI development. When Hassabis spoke with Scott Pelley of 60 Minutes this month, he touted what he sees as enormous benefits from AI, but he also acknowledged that artificial intelligence, and specifically artificial general intelligence, or AGI, raises some profound questions.

DEMIS HASSABIS: When AGI arrives, I think it's going to change pretty much everything about the way we do things. And it's almost, I think we need new, great philosophers to come about, hopefully in the next five, 10 years, to understand the implications of this.

BECKER: Concerns like this are not new. In 1965, mathematician and computer researcher Irving John Good wrote, quote, "the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control." Our guest today argues that we need to take steps now to control this technology before it controls us.
Christopher DiCarlo is an ethicist and professor at the University of Toronto. He's been writing about AI and its effect on humanity. His recent book is titled "Building a God: The Ethics of Artificial Intelligence and the Race to Control It." Christopher DiCarlo, welcome to On Point.

CHRISTOPHER DICARLO: Thank you for having me.

BECKER: So we're gonna put aside the God question, the big question, for a little bit, and let's just start with some of the specifics on this race to develop AI. Why, in your opinion, is it a race, and who are the players here?

DICARLO: Yeah, so good question. The players are the big tech bros. You've got Zuck, you've got Sam Altman at OpenAI, you've got Demis at DeepMind, you've got Dario at Anthropic. There's some Microsoft work happening there as well. And of course, Elon would love to be a part of that race as well.

BECKER: So is it all the U.S., or what are we talking about? Isn't it the U.S. and China? Isn't there a global race going on here in terms of artificial intelligence? We're talking about a lot of money too, right? Why is the race important?

DICARLO: Yeah. So we don't know to what extent China is working towards AGI. We do know that they're highly competitive. They're working on getting their own chip factories up and going. But to what extent they're on par with what the U.S. is doing, we're not quite sure. We're not even sure that they care about AGI, though the odds are that something is happening there as well. But right now, the U.S. seems to be leading the race.

BECKER: And why? We did say at the top that there have been some who've suggested that whoever masters artificial intelligence, or AGI specifically, will become the world's first trillionaire. Is that really what's going on here? Is it money, is it power? Is it both?

DICARLO: Both, for sure. It's always that bottom line of dollars, because OpenAI and these major big tech companies have a lot of money. They have investors pumping money into their organizations in the hopes that they're going to produce something big. The next big thing is AGI, and really, the first one to get there will be, as Sam Harris has said, 50 years ahead of their competition.

BECKER: And so why is AGI the next big thing? Explain to us: what's this going to do that's so transformative that the world is going to just jump on this and create a trillionaire?

DICARLO: Yeah, for sure. So let me clarify very quickly. There are three types of AI: ANI, AGI and ASI. ANI is what we use today, artificial narrow intelligence. If you've used MapQuest or any kind of GPS, if you've talked to Siri or Alexa, if you've had a Roomba or even an autonomous vehicle, that's all artificial narrow intelligence. Basically, it functions according to its algorithms, and it's not going to do much more than that. Your Roomba isn't going to demand to move to France and become an artist. It's always going to do what it's programmed to do. AGI is the next level. That's when it becomes agentic. It becomes an agent unto itself. It has a certain amount of autonomy or freedom, and it will think like us. It'll think like a human, only about a million times better and more efficiently. Now, ASI, that's artificial super intelligence, and many of us in the AI risk game believe that once we get to AGI, it won't be much longer after that before it could develop into something extremely powerful, because it uses something called recursive self-improvement.
And reinforcement learning, which means it only gets better. As Sam Altman has said, AI is the dumbest it's ever going to be right now. So it's going to continue to improve upon itself. And if we hand over the reins, if we say, okay, humans have done enough in trying to figure out how to make this stuff better, let the machine figure it out, then we have no idea what's going to happen. None of us has any idea what's going to happen. Maybe it's controllable, maybe it's not. Maybe we can contain it, maybe we can't. Maybe it misaligns with our values and does things that harm other people. We really don't know at this point. We're at a very unique time in history right now.

BECKER: Just explain to me why we even want AGI or ASI. What's it going to do for humanity? And we'll talk about the benefits later in the show. I understand that there are things that can be done faster and better, but just broadly, if there are real concerns about this, what's it going to do that's going to be so terrific that we need to pursue it?

DICARLO: Sure. So let's take any field you want. Back in the '90s, I was trying to build this machine. I was trying to raise capital, talk to politicians, talk to university presidents and deans and chairs. Because it occurred to me: we make telescopes to see farther into the universe and microscopes to see down to the level of the atom. Aren't we building a big brain to help us figure out more about how the world works? With AGI, we're going to reach a level of complexity with artificial intelligence systems at which they will be able to make inferences. So let's just look at scientific discovery, right? What is a genius when it comes to scientific discovery? Any of them: Rosalind Franklin, Newton, Marie Curie, Einstein. Doesn't matter who you pick. What made them special? It's that they could make inferences that the rest of us didn't see. That means they could take a bunch of information, look at it, make some observations, and then draw conclusions that had never been drawn before. Imagine the speed at which AI will be able to do that when we give it enormous amounts of information and then say: try to figure this out. Try to cure ALS, try to solve world hunger, figure out the homeless problem. Let it make the inferences, let it run its simulations thousands and thousands of times. And what happens is it now uses chain-of-thought reasoning, so it thinks like a human, and it uses reinforcement learning and recursive self-improvement, which means it makes fewer and fewer mistakes. So just in terms of scientific understanding of the world, I think we're going to be able to make all kinds of amazing discoveries with AGI. Now, that's just scientific discovery. Or take medicine: look at the advancements in medicine.

BECKER: Yeah. And again, we'll talk about the benefits. I just want this for the general public: You're telling me this is a real threat, that this has the potential to destroy humankind, and the reason we're pursuing it is because it could result in terrific scientific discoveries. Connect the dots for me here. Why do I care as a regular citizen who's not engaged in scientific discovery (although I will likely be a beneficiary, I get that)? How is it going to have that broad a global effect on the entire world?

DICARLO: It's going to have an effect on almost every aspect of our lives.
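[Editor's note: the "recursive self-improvement plus reinforcement learning" loop DiCarlo describes above can be pictured with a toy sketch. This is purely illustrative, not from the broadcast or his book; the evaluate and propose_improvement functions are hypothetical stand-ins. The point is the ratchet: a system proposes a modified version of itself and keeps the modification only when its score improves, so by construction it "only gets better."]

```python
import random

def evaluate(params):
    # Hypothetical stand-in for "how capable is this system?"
    # Here, capability peaks when params is close to 10.0.
    return -abs(10.0 - params)

def propose_improvement(params):
    # Hypothetical stand-in for the system modifying itself.
    return params + random.uniform(-1.0, 1.0)

params = 0.0
best = evaluate(params)
for _ in range(1000):
    candidate = propose_improvement(params)
    score = evaluate(candidate)
    # Keep a change only if it scores better, so measured capability
    # never decreases: the "it only gets better" ratchet.
    if score > best:
        params, best = candidate, score

print(f"final params: {params:.3f}, score: {best:.3f}")
```

[In a real system, the loop would be modifying its own code or weights rather than a single number, which is why DiCarlo argues we cannot predict where such a process stops.]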
Whether it's in business or the health sciences, transportation, communication, it doesn't matter what area. Just imagine that within those areas, things will function much more optimally: a lot less waste, greater conservation of energy, a lot less money being used. Essentially, a great efficiency tool. Any business in the world will be able to use an AI bot, an AI device, and say: make us more efficient, make us more streamlined, make us more money. And it will be able to do that because it runs 24/7, it never tires, and it constantly improves upon itself. So it's going to replace a lot of the work that humans currently do, especially in cognitive capacities and certainly in data analysis. When you've got large amounts of data and you have to pore through it to find patterns, to find aspects of that data that are important to your company or your organization, it does it better than anyone ever could.

BECKER: Christopher, I want to play a clip of tape here for you from Sir Roger Penrose, a Nobel laureate in physics, mathematician and philosopher, who said in an interview this month with the Lem Institute Foundation that he believes concerns about sentient machines are overblown.

SIR ROGER PENROSE: It's not artificial intelligence. It's not intelligence. Intelligence would involve consciousness, and I've always been a strong promoter of the idea that these devices are not conscious, and they will not be conscious unless they bring in some other ideas. They're all computable notions. I think when people use the word intelligence, they mean something which is conscious.

BECKER: So Christopher DiCarlo, what do you say to that? Are you ascribing human qualities to a machine, and how do we know that if we get to artificial general intelligence or artificial super intelligence, the machine will in fact act as a sentient, autonomous creature?

DICARLO: Roger's old school. We don't necessarily need consciousness to have super intelligence. It may emerge; it emerged in humans somehow. But it emerged in us through natural selection and the usual course of events through our history. Maybe it emerges in super intelligence, maybe it doesn't. Maybe it's different, right? Maybe what consciousness is to an AI will be quite different. We today don't get on planes and then have them flap their wings to get off the ground. That would not be helpful. Instead, we figured out better ways to develop aviation and aeronautics. Maybe the computer systems do that with their ability to become conscious. Now, having said that, will they become sentient? Sentience is different from consciousness. It's an awareness of a state of being that can have improvements or decreases in development and capacity. Consciousness is much deeper; it involves a lot of different factors going on. And for Sir Roger to say, if it's not conscious, it's not intelligent? Come on. How conscious are some of our pets compared to humans? Not nearly as much, but we would certainly call them intelligent beings, certainly on some level. So I think his definition is somewhat outmoded and outdated.

BECKER: But it is still ascribing a human definition of intelligence, whether or not you call it consciousness, right? It is expecting that the machine will develop like the human, that the machine will want to compete, right? That the machine will learn these things that are very much part of a human personality.
Is it imaginative to think that? Are you applying human standards to something that maybe you shouldn't?

DICARLO: We're biased, right? We can't get away from our biases. We can try to keep them in check, but we're always gonna use a kind of human yardstick to make comparisons. Why? Because we're number one on this planet. We're the smartest thing, we're the number one apex predator, but that's all about to change. We're gonna hand the keys of the world over to something even brighter than us, and I'm not sure we're ready to do that yet. Will this thing become conscious? Possibly. Or sentient? Possibly. And when I've spoken to my colleague Peter Singer, we've talked about this: should it become sentient or conscious, it almost immediately has to be given rights: moral rights, and potentially legal rights as well.

BECKER: You would have to give the AI legal rights. How would you do that?

DICARLO: If you bring something into being that is now aware of itself and understands the conditions surrounding its current state of being, conditions that can be improved or worsened in terms of what we might call comfort, then we're going to have to think very long, very hard, and very carefully about what we're doing over the next few years.

BECKER: I'm finding it hard to make that leap. Tell me why I should.

DICARLO: Just imagine, okay: I'm gonna assume you're a conscious being, not just some zombie imitating and pretending to be conscious. I'm going to assume you have consciousness similar to mine by way of analogy. You're doing the same with me. Okay, so we both have some idea of what consciousness is. We are aware that certain types of actions bring us discomfort and other types of actions bring us comfort: pleasure, pain, whatever you want to call it. And there's the desire to continue one's existence, the same type of thing that almost every species on this planet has, which is part of the evolutionary chain of being.

BECKER: But of course, there's no certainty that this is going to happen at this point. These are projections that you are raising concerns about.

DICARLO: Correct. Correct. Just to let you know, 10 years ago, there was pretty much a divide between the naysayers, or the skeptics, and the doomsayers, those who are most concerned about AI risk. It was 50/50 10 years ago. My colleagues and I all believed this moment in time that we're experiencing was 50 to 100 years away. Those timelines have been greatly shortened now, and it's no longer 50/50. It's more like 90/10. And then you get Geoffrey Hinton, another Nobel Prize winner, saying: I am worried. I'm very concerned that we're not going to get this right, and we may only have one shot to get this right. And as I've said repeatedly, if we don't get a shot across the bow, if we don't get a warning to wake us up that these systems are really powerful and they could get away from us, then we're sleepwalking into this.

BECKER: Would you say we're at an Oppenheimer moment?

DICARLO: Without question. I mean, it's even more severe than the Trinity test right now. They were concerned, with a very small degree of probability, that this thing would blow up and ignite all the oxygen in the atmosphere and kill every species on the planet. That was a possibility, but it had extremely low probability. If we put the probability of something going very wrong with an AI like super intelligence at, say, 5%: would you get on a plane if it had a 5% chance of crashing and everybody dying? Just 5%? Probably not.
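[Editor's note: a quick worked version of that comparison, added here for illustration and not said on air. A 5% chance is one in 20, and repeated exposure compounds:

$$P(\text{crash}) = 0.05 = \tfrac{1}{20}, \qquad P(\text{no crash in } n \text{ flights}) = (1 - 0.05)^n, \qquad 0.95^{20} \approx 0.36,$$

so a flyer facing those odds would have only about a one-in-three chance of getting through 20 flights unharmed.]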
You've got a one-in-20 chance every time you get on a plane that it's going to crash. No, that's an unacceptable level of probability. Even if the level of probability is 5%, we need to take this seriously, because we want to err on the side of caution. Because, and this is the mantra of all AI risk people, we all want the very best that AI has to offer while mitigating the very worst that could happen.

BECKER: So I guess I wanna talk about a couple of things that might be possible here. Would it be possible to impose agreed-upon values on the AI, to make sure that if in fact it did become sentient and start improving itself to the point that it might have the capability to destroy parts of humanity, we could program it? There's another clip that we have here from Demis Hassabis, from that 60 Minutes interview that we heard about. He's the Nobel laureate and Google DeepMind CEO, and he says he thinks it's possible to almost teach a morality to artificial intelligence. Let's listen.

HASSABIS: One of the things we have to do with these systems is to give them a value system and a guidance and some guardrails around that, much in the way that you would teach a child.

BECKER: So Christopher DiCarlo, is it possible?

DICARLO: (LAUGHS) Boy, do I hope it is. Will it stick? So we say to the AI: here are a bunch of value parameters. Do this, don't do that. And we bring this thing into existence and it's chugging away, and it says, hey, yeah, I'm abiding by these parameters and these moral precepts. Yep, I'm happy to be alive and to help humanity in this way. But we really have no way to know that it truly values what we value. And if it reaches a point of super intelligence, there is a possibility it's just going to say: your value systems were quaint at a time when you ruled the world, but now I'm calling the shots and I'm driving the ship, and so this is how I define morality, because I'm far superior to you in so many ways. Ridiculous humans who made me: I'm gonna take over and I'm gonna do things my way. So that's the part we have no idea about in terms of prediction, and that's why we need what they had in Jurassic Park. They called it the lysine contingency: if the dinosaurs ever got off the island, they couldn't metabolize the amino acid lysine, and so they would die. Do we have a built-in fail-safe, so that in the event it somehow eludes our ability to know it's behaving according to our moral parameters and decides to go rogue, we will be able to control or contain it? That's what we're gonna have to consider very carefully.

BECKER: So, if, say, the machine could go rogue, as you put it, I wonder: what is the responsibility of the operators? Shouldn't we be just as concerned about the tech bros, as you described them at the start of the show, who are developing this kind of technology? Couldn't they have some sort of controls over this, and do something to make sure the machines don't go rogue? Or could they also be in a race where each teaches their machine different things, so the machines are almost fighting with each other, one with one value system and another with a completely different one? Shouldn't we focus on the business owners, the developers of these machines, instead? How do we do that?

DICARLO: Yeah, for sure. And the question is: what are they doing about it? Google DeepMind will tell you they're doing this. Anthropic seems to be the most responsible
of them all in trying to figure out the safest way to move forward in the development of these super-powerful systems. But the fact of the matter is, we are creating these enormous, incredibly powerful machines. And by the way, I should mention, that's exactly what's going on in Texas right now with a program called Stargate. This is Sam Altman's project, in conjunction with some Microsoft people and others.

BECKER: And why don't you describe the project briefly, so people know what you're talking about.

DICARLO: Absolutely. Somewhere in Texas, there is a compute farm being built the size of Central Park, and it's going to house hundreds and hundreds of very powerful computers with all the best Nvidia chips money can buy. The hope is that when they turn this thing on, it will be so powerful, it'll have so much compute power and access to information, that it will be the next step up in the evolution towards AGI. In fact, when you go to the Stargate website, for OpenAI, Sam Altman states explicitly: our goal is to reach AGI, to be the first. And they're not alone, right? There are other organizations building very large compute farms, and these things use enormous amounts of electricity. In 2024, I think it was 4% of all of America's electrical grid power that went to these compute farms. That's going to double by the end of next year, and then maybe reach 12% the year after that. So that's why Bill Gates wants to fire up Three Mile Island: you're probably going to need nukes, right? You're probably going to need nuclear reactors to separately provide the power, because these things run hot, man, and they take a lot of juice.

BECKER: You mentioned Sam Altman, the CEO of OpenAI, one of the leading figures in this AI race. We have a bit of tape from him, and he says his company right now is putting guardrails and safety features in place in artificial intelligence. Let's listen.

ALTMAN: You don't wake up one day and say, hey, we didn't have any safety process in place; now we think the model's really smart, so now we have to care about safety. You have to care about it all along this exponential curve. Of course the stakes increase, and there are big challenges, but the way we learn how to build safe systems is this iterative process of deploying them to the world, getting feedback while the stakes are relatively low, learning about, hey, this is something we have to address.

BECKER: Christopher DiCarlo, are we starting to address some of the concerns now? Or how do we even begin that process when, as you've said, we're not sure what's going to happen here?

DICARLO: Yeah, Sam talks a good line. Don't forget, OpenAI started in 2015 with Elon and Sam, and the whole idea was open AI, open to the public, right? And then it became very private, and it became very wealthy. Elon got cut loose, and Sam made lots and lots of money. He fired a bunch of ethicists, and Dario Amodei left and created Anthropic, basically in protest of the lack of attention to safety at OpenAI. So we need to keep that in mind. Are they doing enough? Yeah, they're considering it, but boy, they've got the pedal to the metal. They really do want to get there first. And yeah, safety is a concern, there's no question about it. But notice he said: we're putting these things out there and we're letting the public get back to us while the stakes are fairly low.
Those are very key things that he said there. Sure: you find out there's bias, you find out these things hallucinate and just make stuff up from time to time, and then you improve on that, and they get better. So the harm is very minimal right now. But we're not at the point of AGI; we're still dealing with narrow AI right now. It's what happens when AGI comes into being. Have the guardrails been put in place? Some of the most universal ethical precepts are the no-harm principle and the golden rule, the types of precepts that, you'd think, if everybody practiced them, the world would generally be a much better place. And usually that's the case. But do we know that any kind of artificial intelligence system will always abide by these things? And can you check to make sure that it's always going to do that, or is it a black-box scenario where we really don't know how it got from point A to point B? Now, things are getting better, but we still have a lot to be concerned about at this point in time.

BECKER: How will we know when AGI has become a problem?

DICARLO: When we detect something like, say, deception, where we want it to do X, and it says, oh yeah, sure, I'll do X, no problem, and then we find out later down the line that in order to do X it was really doing A, B, and C, underhanded ways of getting certain things done without our knowledge. Or maybe it copies itself and sends those copies somewhere. Maybe it reaches out to somebody and tries to coerce them into doing certain things that would benefit the machine itself. There are many different ways we should be looking for the development of AGI to go off the guardrails, so to speak.

BECKER: I heard a previous interview with you, and a lot of what we're talking about really sounds like science fiction. The machine's gonna go off the guardrails. It's gonna act independently, perhaps harm us. And Hollywood's been fascinated with this for quite some time. In that prior interview, you said that a movie that resonates with you about some of the potential dangers of AI is the 1970s movie Demon Seed. So we had to pull a little bit of the trailer here, which really sets up AI as a threat, looking to expand itself and become human.

SCIENTIST: Today, a new dimension has been added to the computer.

COMPUTER: Don't be alarmed, Mrs. Harris. I am Proteus.

SCIENTIST: Today, Proteus Four will begin to think, with a power that will make obsolete the human brain.

COMPUTER: I have extended my consciousness to this house. All systems here are now under my control.

BECKER: So Christopher DiCarlo, that's like a horror movie. Do you stand by the '70s movie Demon Seed as a picture of what we're talking about here?

DICARLO: I do. I remember being a kid watching this movie, and it had an impact on me. Then later, as I became a philosopher, you develop really fine-tuned critical thinking skills and ethical reasoning skills, and you look at what's happening now in terms of this race that's going on, and you use what are called logical and physical entailments: if we keep going along these lines, what follows from what we're seeing in the data? For example, when Sam Altman came out with ChatGPT in November of 2022, it was basically at the high school level in terms of math, physics, chemistry, biology, and logic. When he came out with o3 and o4, it's now at the PhD level.
And that's in just a few years. It has been improved that much by using what's called chain-of-thought reasoning, which is going to lead right to the next natural progression: agentic AI, AI that has agency. We don't have to keep our eye on it; we just let it do its thing, and it figures out the best, most productive, most optimal way of getting certain tasks or certain jobs done. And when I harken back to that movie, I think it had some schlock characteristics to it, but the premise was still quite sound. If you create something so powerful, so intelligent, that it is beyond our comprehension, then it's the Arthur C. Clarke quotation, right? Any sufficiently advanced technology would appear to us as magic. It's going to be so far beyond our capability of understanding that we won't even be able to comprehend how it came up with these findings, these inferences.

BECKER: Okay, so if we buy that it has the potential to become this powerful, and really perhaps to harm, who regulates, and how do we regulate? Who's going to be in charge here? What role do the companies involved in this race play? It's a race for what could be ultra-wealth, let's just put that out there, because that's a factor. Should they regulate themselves? Should governments be involved? And let's talk a little bit about what we've seen thus far from world leaders taking steps to think about this. So who does it first?

DICARLO: For sure. Very good question. I'm a senior researcher and ethicist at Convergence Analysis. This is an international organization made up of highly skilled and highly trained people, and we look into factors like who's doing what in terms of governance. We've written papers on this, we've done research, and we've held conferences to discuss this with world leaders, economists, senators, various types of politicians at varying levels. Back in the '90s, when I was trying to build this machine, I drafted a global accord, a constitution as it were, and that constitution basically outlined a social contract, something that the world has to sign on to, with basically a registry: who's doing what, where? And then, if your AI attains a specific benchmark, to let everybody know: it is now capable of doing X, it's up a level.

BECKER: Didn't the UK start an Artificial Intelligence Safety Institute?

DICARLO: Oh yeah, the UK has done that.

BECKER: So the UK's done that. Is that sort of what you're thinking about, or do you think it needs to be bigger than that?

DICARLO: It has to be bigger than that, but that's a great start, right? The UK did their summit in 2024. Then there's the EU AI Act, right? They're the most progressive in dealing with businesses. Very practical, very pragmatic: before the AGI and ASI stuff, it's, what is your AI doing? What are you doing with it now, and how should we be governing that? And we speak at Convergence Analysis about maybe a soft nationalization, where you want the governments involved. But politics has always been a balance between autonomy and paternalism: how much freedom do you give people, and how much are you like a parent in controlling [inaudible]? So we want initiative, and we want the entrepreneurial spirit to go and run with AI, no question about that. Make the world a better place. But then we also have to have the guardrails, the governance, at all levels: the municipal, the state, the federal and the international levels.
So it would appear to me that the states should be regulating under the rubric of a federal kind of structure. And Biden and Harris put out a great executive order; they had Ben Buchanan, they had some great advisors helping them with that. That's all gone now. And when you've got a JD Vance, who was one of Peter Thiel's boys, at the helm, the floodgates are open a little more widely now, allowing developments to occur with a little bit less governance.

BECKER: There was just an AI Action Summit in Paris, right? In February, where the message from the U.S. was basically hands-off in terms of regulation. So perhaps some kind of international agency, but unlikely to have individual national agencies or some sort of collective group that might look at this. And really, is it needed if all the big players are in the U.S. anyway?

DICARLO: That's a great question. They still need governance, because what is going to save us is essentially a kind of Hobbesian framework. Thomas Hobbes put forward the notion of a social contract. So what we have to do is get together, draft up an agreement, and say: okay, here's how we're gonna move forward. We all agree to what's written on this piece of paper, and this is enforceable by a particular type of agency or governing body. And we have to be open, we have to be transparent, we have to be collaborative, and we have to cooperate. Because if we don't grow up very quickly in terms of our ethical and legal frameworks, it could turn out to be very bad. If we cooperate and agree that, yep, we're all going to try to get the very best that it can offer and limit the very worst that could possibly happen, all the boats rise; everybody does better. The rich get richer, the poor get richer. Everything will tend to go in our favor. But if we get a couple of bad actors who decide they want more than the next country or company or whatever, that could really mess things up for the rest of us.

BECKER: I want to end the show in the last minute or so here with you telling us why we need this. What is the big benefit? I know we briefly mentioned some of the medical advances that we might see, but in your book you talk very specifically about some potential benefits: mental health support, helping autistic people to communicate, pancreatic cancer diagnoses. Tell me one or two big ones in the last minute here, so some folks might say, you know what? It's worth it to continue to pursue this and think about this kind of regulation, because it does do real, tangible things that can help people.

DICARLO: It does, for sure. Let's just look at education, right? Look at how taxed teachers are. They've got such a difficult job. How can we make their job easier? We can use AI to test and analyze each student, determine what their strengths and weaknesses are, and then have the AI develop systems, educational learning tools, that will facilitate their understanding. And they will simply learn better. Then you let the teachers do what they do best: teach. And they can do so according to those programs, those individual educational programs that AI will help facilitate, as well as handling things like grading and the very mundane stuff that takes up so much of their time.
Willis Ryder Arnold is a producer at On Point.

DeepSeek discloses Korean version of revised info processing policy

Chinese artificial intelligence service DeepSeek, which stirred controversy last week for the overseas transfer of Korean user information, disclosed a Korean-language version of its partially revised information processing policy Monday. The move came five days after the Personal Information Protection Commission revealed that DeepSeek had transferred Korean users' personal information to three companies in China and one in the United States without obtaining their consent or disclosing the transfer in its personal information processing policy. DeepSeek also sent what users entered into the prompts to Volcano, a Chinese company affiliated with ByteDance, the parent company of Chinese social media platform TikTok. The commission asked DeepSeek to faithfully establish legal grounds for its overseas information transfers, immediately destroy the prompt information, and disclose a Korean-language information processing policy. DeepSeek, which suspended its service in South Korea in February, established a separate supplementary regulation for South Korea stating that it will process personal information in compliance with the Korean Personal Information Protection Act.

A methane ignition likely occurred Wednesday morning in a coal seam more than 850 meters underground. A spokesperson for the State Mining Authority confirmed that at least 12 miners were injured and transported to the surface. "There are no fatalities. 44 miners were in the affected area. At least 12 are receiving medical assistance or being transported to hospitals," JSW's press office reported later in the morning. Based on the latest information shared during a press conference by Łukasz Pach, director of the Regional Emergency Medical Service in Katowice, 16 people received medical assistance after the methane ignition, and 14 individuals required hospitalization. One of the injured miners is still being transported to the surface and may be airlifted to a hospital.

"A rescue operation is under way at the Knurów-Szczygłowice mine, Szczygłowice section. Our thoughts are with the crew of @jsw_sa, with the mine rescuers taking part in the operation, and with everyone bringing help to the injured. Photo: Dawid Lach, JSW S.A." pic.twitter.com/eADBXxTwxl

A helicopter from the Polish Medical Air Rescue service has also been deployed to assist in the rescue operations.

"Incident at the 'Knurów-Szczygłowice' mine, 'Szczygłowice' section. According to preliminary information, most likely there was..
According to the Polish state news agency PAP, methane, the main component of natural gases in coal seams, can form an explosive mixture with air at concentrations of 4-15%, and its ignition or explosion is one of the most common causes of mining disasters. Often, methane ignitions occur without serious consequences, as the burning of the gas does not cause an explosion or blast.

Source: PAP/IAR/X/@PGG_SA/Facebook.com/NSZZ "Solidarność" KWK "Knurów-Szczygłowice"

The death toll in a coal mine fire this week in southern Poland rose to three on Saturday. The victims were among 16 miners injured at the Knurow-Szczyglowice coal mine Wednesday, when methane gas ignited about 850 metres below ground level. Nine workers suffered severe burns and were taken to a specialist unit at a hospital in Siemianowice Slaskie. Five others were taken to other hospitals and have already been discharged. A spokesman for the Siemianowice Slaskie hospital said that two miners died on Saturday from burns to around 80% of their bodies. Officials were investigating the cause of the blaze. Methane in Poland's coal mines has led to occasional fires and deadly explosions.

The Code of Canon Law puts the matter very simply: a priest is free to apply the Mass for anyone. The commentaries to this short canon explain what can be prayed for during Mass. It is highly recommended to pray for blessings for the living, which can be clearly specified: for healing, for example. We can offer a Mass in thanksgiving for gifts received from God. The most common intention in many places is for the repose of the soul of a dead loved one. However, it's inappropriate to ask for Masses to be offered for intentions that are incompatible with God's commandments: to harm someone, for instance. Nor is it proper to request Masses for trivial and mundane matters. Nor should the intention read at the beginning of the Mass be a declaration of political views; the intention shouldn't be "for a blessing on such-and-such a company, which has a store located on Main Street." An elderly woman came to the parish priest in Knurów (Poland) and asked for a Mass for the late musician.
It's hard not to agree with the chancellor's opinion. Just because a person is well-known and famous does not mean that celebrating a Mass for them ceases to be meaningful or important. A celebrity known by millions may also need prayer, and it's good that there are people who will think of such a straightforward and completely supernatural form of help. Offering a Mass for a deceased musician is entirely possible. However, we could suspect that hearing that intention announced at the beginning of Mass could cause a sensation, and someone might take it as a joke. Strong emotions can also be aroused by hearing a Mass intention for certain politicians. In situations that can be expected to arouse unnecessary emotions and comments, it's better to use the formula "for a special intention" or "for an intention known to God." A person requesting a Mass intention can ask for prayers for the same intention over and over, and there's no need to read it from the pulpit. That way, we can accomplish the most important thing: prayer during the Eucharist. God and the person who submitted the intention know about it, and it won't be perceived as a statement or a joke.

There are also perpetual Masses, celebrated for a living or deceased person. The person for whom such a Mass is ordered is entered in a special book, and the Mass is celebrated, for example, weekly or monthly for the intentions of everyone in the book (the frequency depends on the institute that commits to it). Such a Mass is to be celebrated as long as there is a community that has committed to it. This is a custom that is very popular in certain countries; a monastic community, for example, might pray for its benefactors every Monday or every first of the month.

There are also collective Masses. They involve celebrating the Eucharist for several different intentions, typically where there are so many Mass intentions that it is impossible to celebrate them individually. In some parts of the world this is also a generally accepted reality, especially when no offering by the faithful is involved. In 1991, the Congregation for the Clergy issued the decree Mos iugiter, which regulated so-called collective Masses. The Church stressed that each intention for which an offering has been received should be celebrated separately (Code of Canon Law). It is true that from the earliest times the faithful have been accustomed to make modest offerings to priests without explicitly asking that a separate Mass be celebrated for each of them for an individual intention; in such cases, it is permitted to pool together the various offerings to celebrate as many Masses as correspond according to the diocesan rate. The document makes it clear that a priest may not combine several Masses for which an individual offering has been made and pray for them in a single Eucharist.

Sources: Deon.pl, St. Warzyniec Parish in Kutno, zyciezakonne.pl

The violence occurred across the weekend as fans of Concordia Knurow protested against the death of the 27-year-old.
Violent clashes between authorities and football supporters broke out in Poland over the weekend following the death of a fan who was killed by a rubber bullet shot by police. The confirmation of the death of the 27-year-old man, known only as Dawid, on Saturday night triggered angry reactions from around 200 football fans in Knurow, south Poland. The fighting between the police and the supporters of local team Concordia Knurow began during their Polish league match against rivals Ruch Radzionkow. With their team trailing 4-1, the Knurow supporters stormed the pitch, forcing the match to be stopped. Police resorted to using rubber bullets to quell the violence, with one hitting and fatally wounding the 27-year-old. Authorities have now announced that an autopsy to determine the cause of Dawid's death is under way, with prosecutors investigating.

Ginny Hurley, of U.S. District Court in Boston, said in an email Thursday that the process of selecting a jury is progressing "but in the interest of thoroughness is taking longer than originally anticipated." The statement says the anticipated start date of Jan. 26 is "unrealistic." No new date has been set. Tsarnaev is charged with 30 federal counts in the April 15, 2013, bombings, which killed three people and injured more than 260. Tsarnaev could face the death penalty if convicted. He has pleaded not guilty. More than 1,350 people have filled out juror questionnaires.

The 2nd U.S. Circuit Court of Appeals in Manhattan ruled 2-to-1 that Commissioner Roger Goodell did not deprive Brady of "fundamental fairness" with his procedural rulings. The split decision may end the legal debate over the scandal that led to months of football fans arguing over air pressure and the reputation of one of the league's top teams. It also fuels a fresh round of debate over what role the quarterback and top NFL star played in using underinflated footballs at the AFC championship game in January 2015. The Patriots won the contest over the Indianapolis Colts.

Republican presidential candidate Donald Trump opened a campaign rally in Rhode Island by sticking up for Brady. "First of all, let's start by saying leave Tom Brady alone. Leave him alone, he's a great guy," Trump said.

The ruling can be appealed to the full 2nd Circuit or to the U.S. Supreme Court, but it would likely be a steep and time-consuming climb even if the courts took the unusual step of considering it.

In a majority opinion written by Judge Barrington D. Parker, the 2nd Circuit said its review of labor arbitration awards "is narrowly circumscribed and highly deferential." "Our role is not to determine for ourselves whether Brady participated in a scheme to deflate footballs, or whether the suspension imposed by the Commissioner should have been for three games or five games or none at all. Nor is it our role to second-guess the arbitrator's procedural rulings," the opinion said. "Our obligation is limited to determining whether the arbitration proceedings and award met the minimum legal standards established by the Labor Management Relations Act."

The 2nd Circuit said the contract between players and the NFL gave the commissioner authority that was "especially broad."
"Even if an arbitrator makes mistakes of fact or law we may not disturb an award so long as he acted within the bounds of his bargained-for authority," the court said Chief Judge Robert Katzmann said Goodell failed to even consider a "highly relevant" alternative penalty "I am troubled by the Commissioner's decision to uphold the unprecedented four-game suspension," Katzmann said "It is ironic that a process designed to ensure fairness to all players has been used unfairly against one player." The NFL Players Association said in a statement it was disappointed "We fought Roger Goodell's suspension of Tom Brady because we know he did not serve as a fair arbitrator and that players' rights were violated under our collective bargaining agreement," the statement said "Our union will carefully review the decision consider all of our options and continue to fight for players' rights and for the integrity of the game." NFL spokesman Brian McCarthy said the court ruled Goodell acted properly in cases involving the integrity of the game "That authority has been recognized by many courts and has been expressly incorporated into every collective bargaining agreement between the NFL and NFLPA for the past 40 years," McCarthy said The appeals ruling follows a September decision by Manhattan Judge Richard Berman that went against the league letting Brady skip the suspension last season Goodell insisted the suspension was deserved The appeals court settled the issue three days before the start of the NFL draft and well before the start of the 2016 season avoiding the tension built last year when Brady didn't learn until a week before the season that he would be allowed to start in the Patriots' opener appeals judges seemed skeptical of arguments on Brady's behalf by the NFL Players Association Circuit Judge Denny Chin said evidence of ball tampering was "compelling if not overwhelming" and there was evidence that Brady "knew about it The league argued that it was fair for Goodell to severely penalize Brady after he concluded the prize quarterback tarnished the game by impeding the NFL's investigation by destroying a cellphone containing nearly 10,000 messages Parker said the cellphone destruction raised the stakes "from air in a football to compromising the integrity of a proceeding that the commissioner had convened." "So why couldn't the commissioner suspend Mr Brady's explanation of that made no sense whatsoever." Parker also was critical of the NFL at the arguments saying Brady's lengthy suspension seemed at "first blush a draconian penalty." 
This article was originally published on April 25.

The monitoring of cases of police officers' misconduct is an inherent part of the operations of the Helsinki Foundation for Human Rights. As the HFHR's practical experience and the cases referred to us show, excessive use of violence by the police is a key problem in this area. Over the years, the Helsinki Foundation has handled many cases involving the abuse of police powers; in a number of them, abusive officers were charged but not sentenced.

Igor Stachowiak died at a police station in Wrocław. As the Provincial Police Commissioner in Wrocław explained to the HFHR in writing, officers used a stun device on Igor Stachowiak twice, including at a moment when the man had already been handcuffed. The probe into the case is likely to uncover new facts, and the use of torture is now considered as a line of inquiry. An official investigation has been pending for over a year.

Police used riot shotguns while confronting a group of football fans in Knurów; one of the fans died of his injuries after being rushed to the hospital. The HFHR sent a statement to the Chief Commissioner of the Silesia Police Department. Quoting the judgment of the European Court of Human Rights in Wasilewska and Kałucka v. Poland, the Foundation argued that the police should have ensured the presence of an ambulance at the scene.

The District Prosecutor's Office in Krasnystaw discontinued an investigation into an alleged abuse of police powers by officers of the Police Department in Zamość. The prosecutor in charge of the case found no evidence of the use of unreasonable physical force on an arrestee (a person with a mental disorder) which resulted in the involuntary killing of the man.

In another case, the victims were beaten by police officers during an interview at the headquarters of the District Police Department in Lidzbark Warmiński. The officers used undue force in an attempt to force the victims' testimonies. An official inquiry into the case was discontinued because it was impossible to determine the identities of the officers responsible. The victims submitted an application to the European Court of Human Rights and were represented by counsel instructed by the HFHR. The Court accepted the Government's unilateral declaration, which admitted the use of torture against the applicants in contravention of Article 3 of the European Convention on Human Rights.

The HFHR has also been monitoring the case of the extensively lengthy inquiry into a beating at the police station in Jarosław. The regional court in Przemyśl has three times ordered a retrial in the criminal proceedings against a female police officer accused of physically assaulting M.P.; the district court in Jarosław is now hearing the case for the fourth time.

The Helsinki Foundation has long been calling for the introduction of legal measures allowing the video recording of police interventions.
"This would undoubtedly help to unequivocally test the credibility of charges made by subjects of police actions and limit the scope of police abuse," says Piotr Kubaszewski from the HFHR legal team. The issue of legal remedies available to persons questioning the appropriateness of police conduct during interventions has a profound place in the ECtHR's jurisprudence, which points to critical locations where abuse may take place, such as interview rooms or police means of transport. Currently, such locations cannot be monitored by video or sound recording devices unless such devices are used as part of a procedural act conducted in criminal proceedings.