The Miami Downtown Development Authority, an agency tasked with promoting the city’s urban core, helped close the deal with a business incentive grant, FC Barcelona said in a statement Wednesday without providing financial details
The deal burnishes the South Florida city’s reputation as an American soccer hub
JDA Bourgogne Dijon Handball put a triumphant finish to their EHF European League Women 2024/25 campaign Sunday by beating HSG Blomberg-Lippe 32:27 in the third-place match at the EHF Finals Women 2025 in Graz
It was the best international result for Dijon since they came runners-up in the Challenge Cup 20 years ago
Dijon had already met Blomberg twice this season
with their German opponents winning both duels in the group phase: 35:30 in Germany and 28:27 in France
when Nieke Kühne netted 10 times in each match
The EHF Finals Women 2025 at Raiffeisen Sportpark in Graz conclude at 18:00 CEST with the big final between Ikast Håndbold and Thüringer HC
HSG Blomberg-Lippe (GER) vs JDA Bourgogne Dijon Handball (FRA) 27:32 (10:16)
Red-hot Claire Vautier leads Dijon to bronze medal
The final tournament of a European club competition is always full of surprises
and JDA Bourgogne Dijon Handball’s Claire Vautier certainly had not expected that her first EHF Finals Women would pan out the way it did this weekend in Graz
In Saturday’s semi-final against Thüringer HC
Vautier was the leading scorer for her team with five goals until she received a red card with 18 minutes still left to play
But in Sunday’s third-place match against another German side
Vautier made her presence on court felt again right away when she scored five times in less than 20 minutes to help Dijon to a healthy lead
Vautier, who has been playing for the club since 2020
contributed eight goals to raise her season tally to 41 since the start of the group phase
SUMMARY: Thüringer HC are crowned in Graz as the new EHF European League Women champions
following an intense final against Ikast Håndbold
FLASH QUOTES: Comments from players and coaches after the EHF Finals Women 2025 final and third place match
FINAL PREVIEW: Ikast Håndbold meet Thüringer HC in final; Blomberg and Dijon play for third place
SUMMARY: Ikast Håndbold and Thüringer HC celebrated semi-final wins in Graz as they get ready for the final on Sunday
FLASH QUOTES: Comments from players and coaches after the EHF Finals Women 2025 semi-finals
SEMI-FINAL REVIEW: The German side beat Dijon 35:29 in the second semi-final of the EHF Finals Women
SEMI-FINAL REVIEW: The 2023 champions are back in the final of the EHF Finals Women in Graz
FEATURE: Thüringer HC left back becomes MVP and top scorer of EHF Finals Women after netting 29 times in Graz
FINAL REVIEW: THC come from behind to beat Ikast 34:32 at the EHF Finals in Graz
Lui Blomberg ’25 will perform his senior piano recital on March 23 at 3 p.m
in Wilson Concert Hall in the Lenfest Center for the Arts
Blomberg is an engineering and music double major from Eugene
The performance is free and open to the public and will be streamed online at https://go.wlu.edu/livestream
Blomberg will perform works of solo piano repertoire
28” and John Field’s “Nocturne in D minor.” The recital will also feature Blomberg’s own compositions performed by an ensemble of string and wind players
Blomberg has studied piano under Akiko Konishi
and composition under the direction of Terry Vosbein
professor of composition and music theory and director of the University Jazz Ensemble
Ted and Joyce; sister Bonnie; and infant brother
December 1 at Mueller-Bies Funeral Home-Lino Lakes
MADISON
(WMTV) - Wisconsin women’s softball beat Indiana 6-5 at Goodman Softball Diamond on Friday night
The Badgers took a 3-0 lead at the end of the first
thanks to a single from Hilary Blomberg that scored two
The Verona grad added a two-run homer in the sixth to put the Badgers up 6-3
Indiana had rallied from a three-run deficit to make it a 4-3 Wisconsin lead in the top of the sixth
Blomberg finished the game 2-for-3 with four RBIs
pitching 5.2 innings and notching her eighth win of the season
UW and IU will play again on Saturday at noon
Burial is in Rock Island National Cemetery
The former Jean Elaine Larson was born January 21
the daughter of Edwin and Lenore (Stewart) Larson
She graduated from Moline High School in 1945 and went on to marry the love of her life
Jean attended Augustana College before becoming an executive secretary at Iowa-Illinois Gas and Electric Company
She was a longtime member of First Lutheran Church
she served on the board and volunteered at Friendship Manor
she looked forward to visits from her daughter
She is survived by her daughter and son-in-law
Kimberlee (Larry Covents) Blomberg-Covents of Moline; her sister
She was preceded in death by her parents; her husband
The family would like to thank everyone at Friendship Manor’s Silver Cross for the wonderful care given to Jean and invites friends to share stories and condolences at TrimbleFuneralHomes.com
died on January 30 at the Rivercross Hospice in Wichita
she was the daughter of James Oliver and Gayle (Gleason) Willson
She graduated from Kansas State University with a nursing degree
She was drawn to the East End for a job opportunity at Stony Brook Hospital and to help her mother at the Gayle Willson Gallery in Southampton
she was a fixture alongside her Mom at “The Gallery” on Job’s Lane
she met Ron von Blomberg of Southampton and they wed in Jamaica in 1991
Traveling often to Jamaica and Louisiana with family
her fond memories go back to her years growing up on Lake Quivira
she often spent her childhood traveling the world
She enjoyed the simple times with her GrandPop Pete in Bossier City
She was an amazing chef and enjoyed cooking for her family and friends
Gayle Willson of East Hampton; and her father
She is survived by her daughter Lily von Blomberg of Southampton; sibling Amy Patton of Germantown
Austin Gleason and wife Georgie of Shreveport
Donations in her name can be received at the St
or Rivercross Hospice in Wichita (rivercrosshealthcare.com)
Chief Blomberg was en route to the annual Newtown Board of Fire Commissioners meeting in Sandy Hook when he was struck by a vehicle and succumbed to his injuries
He joined older brother Robert and younger sister Judi
His family moved to Newtown when he was 5 years old
and he lived here for the rest of his life
Pete graduated from St Rose of Lima Grammar School and Newtown High School
He apprenticed as a cabinet maker and became an accomplished finish carpenter
a trade he continued practicing throughout his life
he assumed a maintenance position with the Newtown School System
until his retirement a short six weeks ago
Pete began his fire service career over a half-century ago with Hawleyville Fire Company
and then with United Fire Company of Botsford (now Botsford Fire Rescue)
where he rose through the ranks to become First Assistant Chief
Pete was also a certified State Fire Instructor
participating in many of the regional Firefighter 1 and 2 classes
and served as a certified Station Evaluator at numerous regional evaluation sites
He had a reputation for being very fair when interacting with students who enjoyed the privilege of his instruction and subsequent evaluations
Pete was among the first responders at fire scenes to strap on an air pack and head into burning buildings
He had a spectacular record of responding to nearly all the calls for service Botsford Fire Rescue received and was often among those first on the scene at calls in other districts
Pete kept the community of Newtown foremost in his thoughts throughout his life
second only to his devoted and loving family
He never said no when there was a call for help
whether it came in the line of duty or from someone he learned just needed a helping hand
a devoted grandfather to Everett — the apple of his grandfather’s eye — and a cherished friend
and brother firefighter to countless others
Pete was always stationed in a central location permitting him to be available for many calls throughout the community
a lover of rock music — having worked concert security for years — a Harley-Davidson enthusiast
He had a vitality and life force that his family admired and looked up to
His daughters knew his was always a safe place to go
One of his daughters remembered sitting with him at a marina recently
Pete was very serious and passionate about his service to the community
yet able to balance that with jokes and socializing with others
His sister this week said he could talk to anyone about anything
He wore his fire shirts while visiting his family in Florida so that it would spark conversation with others
especially those he found who were also wearing similar apparel
he recently constructed a playground with his son-in-law Ed for his grandson Everett
He and his brother enjoyed going to auto races
Pete is survived by his brother and sister-in-law Robert and Ann Blomberg; sister and brother-in-law Judi (Blomberg) and Rudy Baez; daughter and son-in-law Lindsey and Mike Rovinsky
daughter and son-in-law Sara (Blomberg) and Ed Vail
and stepdaughter Amanda Currier and husband Philip Read; grandson Everett Vail; nieces and nephews Camelia Buonaugurio
Burial will follow at St Rose Cemetery in Sandy Hook
Catholic interment will be followed by firematic graveside service
The procession escort from the church to the cemetery will include Newtown apparatus only
A media staging area will be set up on Monday
All media is asked to respect the wishes of Pete’s family and fellow firefighters
Pete’s family asks that those who would like to make a donation in his memory consider one of the Newtown emergency services of their choice or St Rose School
from the University of South Carolina School of Law
Daniel clerked for the South Carolina Attorney General’s Office
served on a South Carolina Supreme Court task force
Michelle Childs of the Circuit Court for the Fifth Judicial Circuit as a part of the Judicial Observation and Education program
Daniel received his undergraduate degree from Columbia International University
He and his wife have five children and too many animals
The Zablocki VA’s Northeast Wisconsin Ambulatory Clinics have their own chaplain
Don Blomberg, a former pastor with experience in rural settings
became NEWAC’s first chaplain in June
after 13 years as a pastor and VA chaplain in Cheyenne, Wyoming
“We were excited to come back to the trees and the water,” Blomberg said
Blomberg ran a dairy farm in Wisconsin for several years before taking on his first pastoral job in Iowa
After receiving his Master of Divinity degree from Denver Seminary in Colorado
he took on his first senior pastorship in Tioga (10 years) before he began to “feel a passion for chaplaincy,” he said
He went back to school to receive his Certificate of Chaplaincy from Denver Seminary and landed at the Cheyenne VA
“My dad served in World War II and came back with a shell on his back
so I saw the effects of war,” he said
noting that his father’s wound never healed
and he spent his life dealing with the pain.
He didn’t speak of the war or his time in the military
He worked as an electrician and helped raise five boys
“I saw what war does to people with my dad
I think that’s given me more compassion for Veterans from all walks of life,” he said
“I developed a love and passion for Veterans
“I know not everyone's going to live their life like my dad did,” he said
“but I can walk with others and try to encourage them
help them and let them know they're not alone in this
So I'm grateful for that opportunity.”
working with Cheyenne’s far-flung clinics
A typical day saw him in a clinic in the morning before hitting the road to meet with Veterans in their homes
“I had a lot of days where I would get up in the morning and it was dark
and by the time I got home at night it was dark,” he said
While he won’t be logging those kinds of miles in northeastern Wisconsin
his expertise in working with rural Veterans will be a boon in his new position
Blomberg will split his time between the three clinics — Green Bay
Blomberg said his goal is to develop relationships with Veterans and let them know he is there for them
“Distance is a huge barrier,” he said
“We try to find other ways to make that happen
You make yourself available or meet virtually
You find things that are workable for certain situations
“Hanging out with Veterans and just being available (are important),” he said
“Really taking the time to let them know that when things happen … they have a chaplain they can turn to.”
Chaplains can serve as a sounding board — not just for Veterans
he said — providing solace in times of stress and facilitating prayer when needed
“Faith looks differently for all people
We basically walk with them where they’re at,” he said.
Chaplains also provide support for VA staff
and Blomberg said the NEWAC staff is adjusting well to having a chaplain they can rely on
“That’s as valuable as the work I do with Veterans,” he said
“Lots of staff are unfamiliar with what chaplains do
so they’re trying to figure this out too
very encouraging and very supportive of chaplain services.”
Allen, chief of chaplain services for the Milwaukee VA
said hiring a chaplain for NEWAC was long overdue
as VAs are required to have chaplains serving all outpatient clinics.
He noted that Blomberg built Cheyenne’s outpatient clinic chaplaincy program
“He brings that level of expertise,” Allen said
“He’s a terrific fit and a tremendous plus for us
The staff is head over heels about Chaplain Don.”
NEWTOWN,
CT (WFSB) - Wake and funeral arrangements have been announced for an assistant fire chief who died in Newtown
The Connecticut Fire Chiefs Association confirmed that 72-year-old Peter Blomberg
who served as assistant chief of the Botsford Fire Department
died on the 28th as he crossed the street to attend a commission meeting
Blomberg will be honored on Sunday and Monday
His family planned to host a wake on Sunday at the Honan Funeral Home in Newtown from 2 p.m
The funeral was set for Monday at 10:30 a.m
Blomberg volunteered in Newtown for 50 years
It happened on Church Hill Road at its intersection with Dayton Street
The driver of a Toyota Tacoma pickup truck remained at the scene
The fire chiefs association posted a photo on Tuesday morning
It marked the second time in less than a week that the state lost a first responder
A procession of local first responders traveled to the funeral home on Tuesday afternoon
“It’s unimaginable to comprehend that less than 8 hours [after] laying [firefighter Robert] Sharkevich to rest
the CT Fire Service has suffered another tragic loss,” the Connecticut Fire Chiefs Association said
“Chief Peter Blomberg of the Botsford Fire Rescue tragically lost his life being struck by a vehicle while crossing the street to attend a fire commission meeting.”
It asked that folks keep the Blomberg family and Newtown area emergency services in their thoughts
Bill McAllister recalled the dedication of Blomberg and followed him in his role as Botsford fire rescue’s first assistant chief
“Pete kept the community of Newtown foremost in his thoughts throughout his life
Hans Reelick from Hollandia Nurseries dropped off flowers to make a memorial for Blomberg
You pray for the people because they’re really going out there to save our lives,” Reelick said
“Pete was a dedicated firefighter in Newtown for many years
and we are all deeply shocked and saddened by this news,” added Newtown First Selectman Jeff Capeci
Newtown and all of its volunteer fire stations and members mourns this horrible loss
We pray for the Blomberg family and offer condolences to all who serve at Botsford Fire Rescue and all first responders who put their lives at risk to protect our community.”
Hwang said the crash which resulted in Blomberg’s death remained under investigation
A longtime resident with approximately 50 years in the fire service died Monday night after being struck by a motor vehicle
Botsford Fire Rescue Assistant Chief Pete Blomberg
was on his way to the Board of Fire Commissioners annual dinner meeting
He was one of two members representing his company
heading to the restaurant where this year’s meeting had been planned
Newtown Volunteer Ambulance Corps and a paramedic were all dispatched to Church Hill Road near Dayton Street around 6:15 pm
Newtown police said Blomberg was hit by a Toyota Tacoma traveling east on Church Hill Road
Blomberg was reportedly crossing from the south to the north side of the road when he was struck
He was transported to Danbury Hospital but did not survive his injuries
The road was quickly closed to all traffic between its intersections with Walnut Tree Hill Road and Sandy Hook Center
Travelers were detoured around the scene for hours
Newtown Police Department Accident Reconstruction Team
Detective Bureau and Drone Unit all assisted with the investigation
Newtown Police said Tuesday morning that the driver of the truck remained on the scene and has been cooperative with the investigation
The name of the driver has not been released
Botsford Fire Rescue issued a brief statement on its Facebook page late Monday night
asking “for your thoughts and prayers and appreciate the outpouring of support thus far
We offer both our sincerest condolences and gratitude to the Blomberg family in this tragic time.”
Newtown Police Department is asking anyone who witnessed the incident to contact Detective Chelsea Harold at 203-426-5841
As news of Pete Blomberg’s death spread this week
fellow first responders began working with Botsford Fire Rescue members to honor and remember him
Members of other companies have been at the Botsford station all week
Members of all five of the town’s fire companies and Newtown Volunteer Ambulance Corps assembled on Main Street in front of Honan Funeral Home
With at least 14 fire trucks plus additional emergency response vehicles lined up along the thoroughfare’s eastern shoulder
first responders stood shoulder to shoulder on both sides of the road as a procession arrived
Newtown Police Department and Connecticut State Police cruisers were escorting the hearse carrying First Assistant Chief Blomberg’s body
Additional vehicles carried family and fire company members
who followed the hearse after it left the State Office of the Medical Examiner in Farmington
Fire companies and departments between that central Connecticut city and Newtown had their ladder trucks on I-84 overpasses
where large American flags waved as the vehicles passed below them
By that time the local first responders were standing at attention
Newtown Police closed the section of Main Street between the flagpole and Hanover Road/Schoolhouse Hill Road while the procession made its way to the funeral home
As first responders were clearing from that detail, Hook & Ladder was dispatched to a possible outside fire
There was something poetic about the sirens sounding and firefighters jumping to action so soon after Blomberg’s body had arrived
Blomberg’s calling hours are planned for Sunday afternoon
His funeral is scheduled for Monday morning at St Rose of Lima Church
Pete Blomberg lived 67 of his 72 years on earth in Newtown
and “he loved Botsford,” where he’d lived for decades
Blomberg’s service with the town’s fire companies began “about 50 years ago,” according to longtime friend and fellow firefighter Bill McAllister
“He joined Hawleyville in his younger days
He and a friend rented an apartment in Hawleyville
so he joined them,” McAllister told The Newtown Bee
McAllister and Blomberg were friends for four decades
and the families spent a lot of time together
For the first part of their academic career
the Blomberg daughters attended St Rose School
Blomberg and McAllister were both members of the school’s Grounds Committee
“It was the old building,” McAllister said this week
“On rainy days the roof would leak through the old skylights and on hot days the kids would be so hot
Pete and I would either be putting buckets down to catch the water or running to the firehouse to get fans to cool everyone down,” he said
“I can’t tell you how many times we did that.”
It was while raising his daughters that Blomberg also joined Botsford’s fire company
“He was as active as humanly possible while raising his daughters,” McAllister said
“His size and strength were always an asset on the fireground
he was always comfortable with ladders too
Blomberg joined southern Newtown’s fire company when it was still called United Fire Company of Botsford
The company was renamed Botsford Fire Rescue in 2008; Blomberg was well on his way up the line of command by then
He served as an engineer before being elected Second Assistant Chief and then First Assistant Chief
Botsford Fire Chief Andrew White said Blomberg was a dependable person and a good leader
“He balanced things very well,” the fire chief added
“He had a few years on some of us and he had a different perspective on things
“He always pursued fire education,” according to his friend McAllister
“We sent him to a national extrication class in Ohio one year
He took time off from work to go out there
and came back with more knowledge to share.”
Blomberg eventually became a certified State Fire Instructor
leading regional Firefighter 1 and 2 classes
He also served as a certified Station Evaluator at numerous regional evaluation sites
He continued to be a top responder to calls
A statement issued this week by Botsford Fire Rescue noted Blomberg’s half-century of service
The company noted their leader was until very recently “among the first responders at fire scenes to strap on an air pack and head into burning buildings
He had a spectacular record of responding to nearly all the calls for service Botsford Fire Rescue received and was often among those first on the scene at calls in other districts.”
Blomberg had apprenticed as a cabinet maker and became an accomplished finish carpenter
“a trade he continued practicing throughout his life,” the fire company statement also noted
Later in life he took a maintenance position within Newtown Public Schools
“he was always stationed in a central location permitting him to be available for many calls throughout the community,” the company said this week
First Selectman Jeff Capeci said the longtime Newtown resident “had a warm and friendly personality that will be deeply missed by both the Newtown fire community and the wider town alike
His dedication to our Newtown truly makes it a better place for everyone.”
Board of Fire Commissioners Chair Patrick Reilly extended his condolences on behalf of that board to Blomberg’s family
Blomberg, he said, “was one of those guys that everyone knew and liked.”
“There is a big hole at Botsford” Fire Rescue, Reilly added
While the friendly and respected septuagenarian will be missed in the fire service
his friendship and personality are also being mourned
“Pete was one of the funniest dudes you’d ever meet,” White said Wednesday
He knew how to get people to do things the right way
NEWTOWN, CT (WFSB) - Connecticut has said goodbye to a fallen hero in Newtown
Assistant fire chief Peter Blomberg’s funeral got underway on Monday morning
Blomberg was killed last week when he was hit by a car while crossing the street
He was on his way to a fire commissioners meeting
“I am comforted in knowing he was not alone in his last moments
He was surrounded by many first responders,” said Bob Blomberg
“It is impossible to conceive that I will never talk to him again.”
Blomberg served for decades, most recently as the assistant fire chief with the Botsford Fire Department
“We’re a big family and brothers and sisters in the fire service,” said Kenny Walker
The wake for Blomberg was held in Newtown on Sunday
The outpouring of grief then continued on Monday at 10:30 a.m
when Blomberg’s family and friends hosted his funeral at the St Rose of Lima Church
Those who knew him say he was always there to help those in need
adding that he responded to two fire calls the day he died
a touching story from Monsignor Robert Weiss detailed Blomberg’s lifelong dedication to his community
including in the days following the Sandy Hook tragedy
“This man was here at all eight of the funerals that took place in this church
so present to those people who were so broken
those are the things we remember,” said Rev. Robert Weiss
Blomberg was set to be laid to rest with honors at the St Rose Cemetery
He leaves behind his grandson Everett, described as the apple of his grandfather’s eye
YWCA Greenwich has announced that Jill Blomberg will be the recipient of the 2024 Purple Ribbon Award which will be presented at YWCA Greenwich’s Annual Candlelight Vigil on Tuesday
The Purple Ribbon Award honors an individual who has made significant contributions to the lives of domestic violence survivors in our community
Blomberg has been working with YWCA Greenwich’s Harmony Project staff since 2018
providing guidance on how domestic violence plays out in civil and family court cases and how abusive partners are able to use the legal system against victims
making life-saving referrals to the Harmony Project for survivors of domestic violence
Clients share how Jill understands and validates their safety concerns
“We congratulate Jill Blomberg on this well-deserved honor and thank her for her dedication to survivor support,” said Harmony Project Director Jessie DiMuzio
In addition to the presentation of the Purple Ribbon Award
YWCA Greenwich will memorialize the 26 people who lost their lives to domestic violence in Connecticut last year
as well as honor the strength and resilience of survivors
The Candlelight Vigil is a free community event
If you or someone you know has experienced domestic abuse and/or sexual violence
please call Harmony Project’s 24 hour hotline at 203-622-0003
For more information, please visit: www.ywcagreenwich.org
Newtown Hook & Ladder and Sandy Hook Volunteer Fire & Rescue were already planning to do the annual summer-to-winter flag changeover on Saturday
When the lowering of the flags was announced by the governor
the fire companies decided to take advantage of the timing
members of Newtown Hook & Ladder and Sandy Hook Volunteer Fire & Rescue were joined by Botsford Fire Rescue Chief Andrew White
The firefighters were also joined by members of Newtown Lions Club
who regularly assist on the changing of the flag
The 20- by 30-foot summer flag was lowered first
with firefighters standing ready to catch it before it touched the ground
They quickly moved to the sidewalk in front of Newtown Meeting House
and an 18- by 24-foot winter flag was raised to the top of the 100-foot-tall pole
With assistance from Hook & Ladder Firefighter Rachael Warburton
Chief White then lowered the flag to half-staff
Firefighters raised the flag to full staff Tuesday morning
Gov. Ned Lamont last week noted Blomberg “dedicated his career to fire prevention and the safety of our communities
and his line of duty death is an awful tragedy
My prayers and condolences are with his family and friends
his fellow firefighters who serve with Botsford Fire Rescue
and all first responders who selflessly serve the public.”
Higher Rents Are Coming If Interest Rates Don’t Budge
An expected drop in US apartment supply has landlords planning hikes
Hopes for an increase in housing construction were pinned on lower borrowing costs
But with longer-term interest rates remaining stubbornly elevated and the Federal Reserve showing no urgency to ease policy
higher rents and home prices will be needed to drive an increase in production
That’s grim news for renters and would-be homebuyers alike
but it’s the reality of the situation at a time of lofty construction and financing costs
Ann Louise Blomberg passed away peacefully at Georgian Bay General Hospital in Midland surrounded by the love and support of her family on February 28
Ann is lovingly remembered by her husband Owen Harries
her two children Gregory (Kate) and Gillian (Thomas)
Ann graduated from Queen’s University with a Bachelor of Nursing degree in 1982
She worked in Edmonton as an RN in 1982 but soon realized that she was ready for another challenge
She attended McMaster Medical School 1983 – 1986 and completed Family Practice internship and residency at McMaster University 1986 – 1988
She worked at Waypoint Centre for Mental Health from 1988 until retirement in May 2022
She specialized in Geriatric Psychiatry and Dual Diagnosis (Developmentally Delayed)
Ann’s proudest accomplishment was raising Greg and Gillian
“a humbling experience.” As a mother she encouraged sports and music and a love of the outdoors
She loved being in nature and endeavoured to go outdoors as often as she could
Her favourite place was Awenda Provincial Park where she has fond memories of camping with friends
and hiking and cross-country skiing with Nancy Hargrave
She took up piano lessons again in her fifties to learn different genres of music such as jazz and Latin
she performed in recitals and the Midland Music Festival
She was loved and supported by her Friends in Faith and Knox Church Midland family
“I can do all things through Christ who strengthens me,” Philippians 4:13
She loved to travel and to learn the history of new places
She was especially close to Mary and Stephen Barber with whom she and Owen went on many ski trips as well as trips to Italy
She would like to thank the many friends and family who brought food for her and Owen
and Sara Rask of the Hudson Regional Cancer Program for her excellent collaborative care
The family requests that any donations be given to one of the following organizations that were important to Ann: The Canadian Cancer Society, Knox Presbyterian Church Midland, the Hudson Regional Cancer Centre in Barrie, and Georgian Bay General Hospital in Midland.
Most African central banks weighing interest rates over the next three weeks are set to lower them
wary that the window to ease further may narrow after Donald Trump becomes US president
Memorial Physician Clinics welcomes Christopher S. Blomberg in the practice of Family Medicine at Memorial Physician Clinics Cedar Lake Family Practice
Blomberg received his Doctor of Medicine at the American University of the Caribbean School of Medicine
He completed his Family Medicine Residency at Louisiana State University Health Science Center
Blomberg is board-certified in Family Medicine
Michelle Blomberg never set out to be the CEO of a computer software company
but that's where she found herself 24 years ago
responsible for ushering a company into the 21st century
Blomberg, CEO of the Rochester-based AgVantage Software Inc.
now oversees a much larger company with customers across the country and a product line that covers every inch of the agricultural supply chain
The company has put in place a process called customer-driven development, wherein AgVantage develops new software based on customers’ requests and feedback
Michelle was one of 10 children raised in the small farming community of Albert City
Her father worked as an accountant for a hog refinery in town until one day at work a representative from IBM stopped by to recruit anyone working in the ag industry with a proclivity for math
He took some computer programming classes
and then IBM sent him to Minneapolis for three weeks
After that, "he was a computer programmer for IBM," Blomberg said of her father
After writing computer software for some time at the local cooperative in the late '70s
her parents started their own computer software company
Michelle listened to her father when he suggested she minor in computer science at Wayne State
where she would major in math and had scholarships to play basketball and softball
Her first job out of college was as a computer programmer in the Twin Cities
which made her realize she'd rather be teaching math
until her parents started to talk retirement and their interest in her taking over the family business
and she moved back to her hometown to learn directly from them
"They had 15 or so co-ops that were running my dad's system that he had written
and my mom was his bookkeeper," Blomberg said of Be-Rich
The year was 1999 and the company belonged to her
Through the '80s and '90s her father had written all the largest computer programs using a two-digit code for the year
One option was to pay about $500,000 to have the entire system converted
Blomberg said she had about $60,000 to her name at the time
"It needs to be four digits or it's going to fail," Blomberg said of her father's words
Another family-owned computer software company — Udenberg & Associates — had operated since 1976 in the city of Rochester
where IBM’s engineering and educational facilities took up a 397-acre site on the edge of the city
The company, which would eventually partner with IBM to offer software and hardware
was started by Don Udenberg, who wrote a software system to make business more efficient at grain elevators and other farm businesses
Udenberg also noticed the rise in consolidation in the ag industry in the '90s
and the company focused on handling complex mergers between multiple businesses
When Udenberg reached out to Blomberg to discuss a merger
she was able to get the blessing of her parents
"It was basically the same story as my mom and dad's business
only a little bit bigger," Blomberg said of Udenberg's company
which had about 30 co-ops using its systems at the time
The two companies merged, and Blomberg became CEO of the newly named AgVantage
which had about 40 clients who were soon to be part of a major pivot
Even though the company at the time was roughly 25 years old
it had to "reinvent everything," Blomberg said of the early 2000s
"We had a lot of things that needed to get out of the old ways and get into some new technologies
and just really everything we needed to get a fresh start and move on."
The cost-saving process now known as cloud technology was put into place in the early 2000s by AgVantage
which could host companies on its IBM back end rather than businesses having to make the purchase and upkeep themselves
"They would need to invest maybe $50,000 in a machine that was going to be there for only three years," she said
"So we worked with IBM and carved out a space that could be protected for them
Blomberg said the company's business of hosting has gone "gangbusters," and nearly 80% of their customers run on the machine hosted by AgVantage
She said unlike most of the major cloud services or shared cloud spaces
they offer an actual sellable space where customers get peace of mind: they can "come in here and touch the machine," she said
At 48 years old, the company has grown to 53 employees and has tens of thousands of users relying on AgVantage software daily
The company remains a go-to for handling agribusiness mergers
including with Ohio-based Sunrise Cooperative
which has gone through three mergers in the last decade
said the company uses AgVantage Software for all its services
and not just to handle the onboarding of new employees and systems
"It’s the true backbone of our company when it comes down to software
If software won’t interface with AgVantage
As a computer software company that specializes in handling complex mergers
Blomberg said the company will never change hands as long as she's still at the helm
to allow customers to trust that the owners and the employees are one and the same
There are stock restrictions to keep the 100% employee ownership in place
and when an employee leaves the company or retires
they are required to sell the stock to current employees or back to the company
Blomberg said there used to be several computer software companies that were employee-owned
but in recent years they've been picked off by financial institutions
"This happens to me about 10 times a month," she said of offers from investment firms
We're interested in buying you.' And I just say we're not for sale
'Everybody has a price tag.' I really don't have a price tag."
When asked what the company's worth is today
Blomberg steers the question in another direction
"These last five years have just been really good for the company
with our customers and our employees have been super happy
and so it's what drives us financially to do well
win mergers like we're winning most of the time."
Both of Blomberg's parents have passed away
but before her dad died she would often visit him and his friends in Arizona
"That's the other reason I always say we're not for sale
because when my mom had already passed away
I'd go every year for a week and just hang out with him and his crony buddies down in Arizona," Blomberg said
"These old guys would sit around drinking coffee
I've just taken that so much to heart that I can't let it go."
NEWTOWN, CT — An assistant fire chief in Newtown was struck and killed by a driver on Monday night
The Connecticut Fire Chiefs Association identified the victim as Peter Blomberg
Blomberg was hit as he crossed Church Hill Road near its intersection with Dayton Street
and Sandy Hook Fire Department were called to the scene of the accident at around 6:15 p.m.
Blomberg was taken to Danbury Hospital for injuries sustained in the accident and was pronounced dead at the hospital
Church Hill Road remained closed in the area of the accident until approximately 10:30 p.m
Officers on scene determined that a Toyota Tacoma pickup truck was traveling eastbound on Church Hill Road and struck Blomberg as he crossed Church Hill Road toward the westbound shoulder on his way to attend a town commission meeting
The Tacoma driver remained on scene and has been cooperative with the investigation
“We offer both our sincerest condolences and gratitude to the Blomberg family in this tragic time,” Botsford Fire Rescue said
“There will be more to follow in the days ahead.”
(KMTV) — This week's Zach at the Zoo brings us back to Omaha Henry Doorly Zoo & Aquarium's Amphibian Conservation Area to learn all about the Blomberg's Toad – one of the largest toads in the world
Today’s visit to the zoo brings us back off-exhibit to the Amphibian Conservation Area
Supervisor Derek Benson introduced us to the Puerto Rican crested toad and the Wyoming toad
“Still growing — just one year old — these are Blomberg's toads.”
"They’re one of the top 3 (largest) so they should get double this size
They will blow up to seem bigger than they are,” Benson said
“If you look at the muscle tone of their leg — we gave them this room to develop that
So we give them this space to jump and swim and climb,” Benson explained
Anything that we can put in front of them — crickets
We’re honestly trying to get bigger and bigger prey items
It’s folded in their mouth and then comes out to catch prey and goes back in.”
And the good news for you – eventually you’ll be able to see them for yourself
“These will be on exhibit here at the zoo in probably a year or so.”
These nocturnal toads will fit in perfectly in the Kingdoms of Night – inside the Desert Dome
The result extended Tottenham's unbeaten run at home in the Europa League to 14 matches and they will take a two-goal cushion to Aspmyra next week
where their opponents have achieved six wins from their seven home fixtures in this season's competition
Bodo/Glimt made history by becoming the first Norwegian team to play in the semi-finals of a UEFA senior men’s club competition
A fire official was struck and killed during a crash in Newtown on Monday night
The crash was reported around 6:15 p.m. on Route 6 in the area of Dayton Street where emergency crews responded on the report of an accident involving a vehicle and a pedestrian, according to the Newtown Police Department
The pedestrian was transported to Danbury Hospital for injuries suffered in the accident and was pronounced dead at the hospital
Police said the crash involved a Toyota Tacoma pickup truck that was headed east on Route 6 when it struck a pedestrian crossing the road toward the westbound shoulder
The pedestrian involved was identified as Botsford Fire Rescue Assistant Chief Peter Blomberg, according to the Connecticut Fire Chiefs Association and Botsford Fire Rescue
Blomberg was crossing the street to attend a fire commission meeting when he was hit
Blomberg’s death comes on the heels of 66-year-old Robert Sharkevich Sr., a Wethersfield volunteer firefighter, being laid to rest on Monday during a service in Hartford that was attended by hundreds
Sharkevich was killed last week when the UTV he was riding in rolled over and injured three other firefighters as crews were responding to the wildfire on Lamentation Mountain in Berlin
The fire chiefs association described it as “unimaginable to comprehend” that Blomberg could be killed in a crash less than eight hours after Sharkevich was laid to rest
“Please keep the Blomberg family and the Newtown area emergency services in your thoughts,” the association wrote
“We ask for your thoughts and prayers and appreciate the outpouring of support thus far,” Botsford Fire Rescue wrote in a statement
“We offer both our sincerest condolences and gratitude to the Blomberg family in this tragic time.”
Sen. Tony Hwang, who represents Newtown and is ranking senator on the Transportation Committee
said Blomberg’s death is “devastating and tragic news.”
“Newtown and all of its volunteer fire stations and members mourns this horrible loss,” Hwang said in an email
“We pray for the Blomberg family and offer condolences to all who serve at Botsford Fire Rescue and all first responders who put their lives at risk to protect our community.”
Inside the push to change how every American’s nest egg is invested and make private assets mainstream.
During the meeting attended by Blackstone Inc., UBS Group AG, Neuberger Berman and others, participants assembled a manifesto articulating private equity’s rightful position in 401(k) plans, including in the default portfolios for workers who don’t select their own investments.
Volume 3 - 2021 | https://doi.org/10.3389/fhumd.2021.673104
This article is part of the Research Topic “Human and Artificial Collaboration for Medical Best Practices”
According to a recent literature review (Bærøe et al., 2020)
Artificial Intelligence (AI) in healthcare is already being used: 1) in the assessment of the risk of disease onset and in estimating treatment success (before initiation); 2) in an attempt to manage or alleviate complications; 3) to assist with patient care during the active treatment or procedure phase; 4) in research aimed at elucidating the pathology or mechanism of and/or the ideal treatment for a disease
While there are some first uses of AI in healthcare, there is still a lack of many approved and validated products. Indeed, given that “the artificial intelligence industry is driven by strong economic and political interests,” the need for trustworthy adoption of AI in healthcare is crucial (Bærøe et al., 2020)
AI has the potential to “greatly improve the delivery of healthcare and other services that advance well-being, if it is validated by the authorities, accepted and supported by the Healthcare Professionals and Healthcare Organizations and trusted by patients” (MedTech Europe, 2019; Deloitte, 2020)
According to the AI HLEG, an AI system to be trustworthy needs to be: lawful—respecting all applicable laws and regulations
robust—both from a technical and social perspective
and ethical—respecting ethical principles and values
The AI HLEG defines four ethical principles rooted on fundamental rights (AI HLEG, 2019): 1) respect for human autonomy, 2) prevention of harm, 3) fairness, and 4) explicability
Based on these four principles, the AI HLEG sets out seven requirements for AI systems to be deemed trustworthy and which assist the process of self-assessment. Each requirement is described below (AI HLEG, 2019), with a brief illustrative sketch after the list:
• Human agency and oversight: all potential impacts that AI systems may have on fundamental rights should be accounted for, and the human role in the decision-making process should be protected
• Technical robustness and safety: AI systems should be secure and resilient in their operation in a way that minimizes potential harm
and fosters confidence in their reliability;
• Privacy and data governance: given the vast quantities of data processed by AI systems
this principle impresses the importance of protecting the privacy
and quality of the data and protects human rights of access to it;
• Transparency: AI systems need to be understandable at a human level so that decisions made through AI can be traced back to their underlying data
If a decision cannot be explained it cannot easily be justified;
• Diversity, non-discrimination and fairness: AI systems need to be inclusive and non-biased in their application
This is challenging when the data is not reflective of all the potential stakeholders of an AI system;
• Societal and environmental wellbeing: in acknowledging the potential power of AI systems
this principle emphasizes the need for wider social concerns
and individuals to be taken into account; and
• Accountability: this principle seeks to ensure clear lines of responsibility and accountability for the outcomes of AI systems
and an environment in which concerns can be raised
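To make this closed vocabulary concrete (the sketch referenced above), the seven requirements can be encoded as a simple checklist structure against which assessors attach notes and, later, mapped issues. This is our own illustrative shorthand, not an official AI HLEG artifact:

```python
# Sketch: the seven trustworthy-AI requirements as a closed vocabulary.
# Identifier names are our own shorthand, not an official AI HLEG artifact.
from enum import Enum

class Requirement(Enum):
    HUMAN_AGENCY_AND_OVERSIGHT = "Human agency and oversight"
    TECHNICAL_ROBUSTNESS_AND_SAFETY = "Technical robustness and safety"
    PRIVACY_AND_DATA_GOVERNANCE = "Privacy and data governance"
    TRANSPARENCY = "Transparency"
    DIVERSITY_NON_DISCRIMINATION_AND_FAIRNESS = "Diversity, non-discrimination and fairness"
    SOCIETAL_AND_ENVIRONMENTAL_WELLBEING = "Societal and environmental wellbeing"
    ACCOUNTABILITY = "Accountability"

# A self-assessment attaches notes (and, later, mapped issues) per requirement.
checklist: dict[Requirement, list[str]] = {req: [] for req in Requirement}
checklist[Requirement.TRANSPARENCY].append(
    "Can each recommendation be traced back to its underlying data?"
)
```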
The interpretation and implementation of trustworthy AI depends on the domain and the context where the AI system is used
Although these requirements are a welcome first step toward enabling an assessment of the societal implication of the use of AI systems
there are some challenges in the practical application of requirements
- The AI HLEG trustworthy AI guidelines are not contextualized by the domain they are involved in
The meaning of some of the seven requirements is not anchored to the context (e.g.
- They mainly offer a static checklist (AI HLEG, 2020) and do not take into account changes of the AI over time
- They do not distinguish different applicability of the AI HLEG trustworthy AI guidelines (e.g., during design vs. after production) as well as different stages of algorithmic development, starting from business and use-case development, design phase, training data procurement, building, testing, deployment, and monitoring (Morley et al., 2019)
- There are no available best practices to show how to implement such requirements and apply them in practice
- The AI HLEG trustworthy AI guidelines do not explicitly address the lawful part of the assessment
To help overcome some of these shortcomings
we created a holistic process to guide a trustworthy AI assessment
We present a case study to illustrate how it applies to a specific healthcare context
The paper’s main contribution is to demonstrate how to use the general AI HLEG trustworthy AI guidelines in practice for the domain of healthcare
we present a best practice of assessing the use of machine learning (ML) as a supportive tool to recognize cardiac arrest in emergency calls
The AI system under investigation has been used in the city of Copenhagen in Denmark since Fall 2020
We use a process to assess trustworthy AI in practice, called Z-Inspection® (Zicari et al., 2021)
which expands upon the “Framework for Trustworthy AI” as defined by the AI HLEG
The Z-Inspection® is a holistic process based on the method of evaluating new technologies according to which ethical issues must be discussed through the elaboration of socio-technical scenarios. Echoing work in pragmatist ethics (Keulartz et al., 2002; Lucivero, 2016)
this methodology makes it possible to implement the principles and requirements as defined in the AI HLEG trustworthy AI guidelines
while ensuring a satisfactory consideration of the specific issues of the cases studied
Socio-technical scenarios prove to be a particularly effective means of eliciting the reflections necessary to achieve the aims of the Z-Inspection®
The Z-Inspection® process, in a nutshell, is depicted in Figure 1
and it is composed of three main phases: 1) the Set Up Phase, 2) the Assess Phase, and 3) the Resolve Phase
FIGURE 1. The Z-Inspection® process in a nutshell [with permission from (Zicari et al., 2021)]
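To fix ideas, the three phases can be sketched as a simple pipeline of data types. This is our own illustrative reading of the process described in Zicari et al. (2021), not tooling released with it:

```python
# Illustrative sketch of the Z-Inspection(R) phases as a pipeline of data types.
from dataclasses import dataclass, field

@dataclass
class SetUp:
    conflicts_of_interest: list[str]   # must be empty for the assessment to proceed
    team: list[str]                    # multi-disciplinary experts
    boundaries: str                    # scope and ecosystem(s) of the assessment

@dataclass
class Assess:
    scenarios: list[str] = field(default_factory=list)    # socio-technical scenarios
    open_issues: list[str] = field(default_factory=list)  # open-vocabulary flags
    mapped_issues: dict[str, list[str]] = field(default_factory=dict)  # flag -> requirements
    verified_claims: list[str] = field(default_factory=list)

@dataclass
class Resolve:
    recommendations: list[str] = field(default_factory=list)  # addressed to stakeholders

def z_inspection(setup: SetUp) -> Resolve:
    # Independence is a precondition of the whole process.
    assert not setup.conflicts_of_interest, "conflict of interest: cannot proceed"
    assess = Assess()
    # ... iterate: scenario analysis, issue identification, mapping, verification ...
    return Resolve()
```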
Our approach is inspired by both theory and practice (“learning by doing”); it was developed while assessing an AI system that uses ECGs (electrical signals of the heart) of patients to predict the risk of cardiovascular heart disease
The detailed process is described in (Zicari et al., 2021)
Here we recall some of the key elements of the process
The Z-Inspection® is a general process to assess trustworthy AI in practice that can be used for a variety of domains where an AI system is under development and/or deployed
Here we describe it as it is pertinent to the use case that we are reporting in this paper
The Set-Up phase starts by verifying that no conflict of interest exists
between independent experts and the primary stakeholders of the use case
This phase continues by creating a multi-disciplinary assessment team composed of a diverse range of experts
including healthcare domain experts (such as cardiologists
and cardiovascular and public health researchers)
This is one of the most important aspects of our approach to ensure that a variety of viewpoints are expressed when assessing the trustworthiness of an AI system
The set-up phase also includes the definition of the boundaries of the assessment
taking into account that we do not assess the AI system in isolation but rather consider the social-technical interconnection with the ecosystem(s) where the AI is developed and/or deployed
The Assess Phase is composed of four tasks:
1) The creation and analysis of Socio-Technical Scenarios for the AI system under assessment;
2) The identification and description of ethical, technical, and legal “issues” using an open vocabulary;
3) The mapping of such “issues” to some of the four ethical principles and the seven requirements defined in the EU framework for trustworthy AI;
4) The execution of verification of claims.
A number of iterations of the four tasks may be necessary in order to arrive at a final consolidated rubric of issues mapped into the trustworthy AI framework
We report the results of the various tasks of the Z-Inspection® process applied to the specific use case presented below
This paper is a first reflection of what we are learning by assessing this case
The final results of the assessment will be published in a follow-up paper
The problem: Health-related emergency calls (112) are part of the Emergency Medical Dispatch Center (EMS) of the City of Copenhagen, triaged by medical dispatchers (i.e., medically trained dispatchers who answer the call, e.g., nurses and paramedics) and medical control by a physician on-site (Lippert, 2018)
In the last years, the Emergency Medical Dispatch Center of the City of Copenhagen has failed to identify approximately 25% of cases of out-of-hospital cardiac arrest (OHCA); this last quarter has only been recognized once the paramedics/ambulance arrive at the scene (Viereck et al., 2017; Blomberg et al., 2019; Drennan et al., 2021)
In these cases, the Emergency Medical Dispatch Center of the City of Copenhagen loses the opportunity to provide the caller with instructions for cardiopulmonary resuscitation (CPR)
OHCA is a life-threatening condition that needs to be recognized rapidly by dispatchers
and recognition of OHCA by either a bystander or a dispatcher in the emergency medical dispatch center is a prerequisite for initiation of CPR
Cardiopulmonary resuscitation (CPR), as defined by Perkins et al. (2015), consists of compressions on the upper body to mechanically keep the blood flowing after the heart has stopped beating
Previous research has identified barriers to the recognition of OHCA (Sasson et al., 2010; Møller et al., 2016; Viereck et al., 2017). Improving early recognition is a goal for both the American Heart Association and the Global Resuscitation Alliance (Callaway et al., 2015; Eisenberg et al., 2018; Nadarajan et al., 2018)
The AI solution: A team led by Stig Nikolaj Blomberg (Emergency Medical Services Copenhagen, Denmark) worked together with a start-up company and examined whether a machine learning (ML) framework could be used to recognize out-of-hospital cardiac arrest (OHCA) by listening to the calls made to the Emergency Medical Dispatch Center of the City of Copenhagen
The company designed and implemented the AI system and trained and tested it by using the archive of audio files of emergency calls provided by Emergency Medical Services Copenhagen in the year 2014
The prime aim of this AI system is to assist medical dispatchers when answering 112 emergency calls to help them to early detect OHCA during the calls
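The vendor’s model architecture is not described here; purely to make the task shape concrete (binary classification of an emergency call from its audio, where a missed OHCA is the costly error), here is a minimal sketch. The featurizer and the synthetic stand-in data are entirely our own placeholder assumptions, not the deployed system:

```python
# Minimal, hypothetical framing of call-level OHCA detection.
# The deployed system is proprietary; this only illustrates the task shape.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

def featurize(call_audio: np.ndarray) -> np.ndarray:
    """Placeholder featurizer: crude waveform statistics. A real system would
    rely on speech recognition and/or learned audio representations."""
    return np.array([call_audio.mean(), call_audio.std(), np.abs(call_audio).max()])

# Synthetic stand-in data: 200 "calls"; label 1 marks a cardiac-arrest call.
rng = np.random.default_rng(0)
labels = np.array([i % 2 for i in range(200)])
calls = [rng.normal(scale=1.0 + 0.5 * y, size=8000) for y in labels]

X = np.stack([featurize(c) for c in calls])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

clf = LogisticRegression().fit(X_tr, y_tr)
# Recall (sensitivity) is the metric that matters most here: a missed OHCA
# means the dispatcher is never prompted to start CPR instructions.
print("recall:", recall_score(y_te, clf.predict(X_te)))
```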
Status: The AI system was put into production during Fall 2020
The research questions: Is the AI system trustworthy?
This is a self-assessment conducted jointly by a team of independent experts together with the prime stakeholder of this use case
The main motivation of this work is to study whether the rate of lives saved could be increased by using AI
and at the same time to identify how trustworthy the use of the AI system assessed here is
and to provide recommendations to key stakeholders
To perform the assessment, an initial team of interdisciplinary experts was formed. The composition of the team is a dynamic process and the choice of the experts, their skills, background, and roles have a significant ethical implication for the overall process. In our opinion, one cornerstone of being able to conduct an independent AI ethical assessment is the absence of conflict of interests, both direct and indirect
If conflicts of interests are revealed in the course of the inspection
they are recorded and shared with whatever entities solicited the inspection in the interest of transparency and integrity
We then defined the boundaries and the context of the assessment: an AI system is never analyzed in isolation but always by taking into account what we call the “ecosystems.”
We define an ecosystem, as applied to our work, as a set of sectors and parts of society, level of social organization, and stakeholders within a political and economic context where the AI is playing a role (Whittlestone et al., 2019)
The Assess Phase is composed of four tasks: I. The analysis of the usage of the AI system; II. The identification of possible ethical issues; III. The mapping of such issues to the trustworthy AI ethical values and requirements; IV. The verification of claims
The basic idea is 1) to identify a list of ethical and other issues (called flags) which require inspection; 2) to map them to some or all of the seven requirements for trustworthy AI; and 3) to create a plan of investigation that provides feedback to re-evaluate the initial list of ethical issues and flags and produce a consolidated list.
We can visualize this part of the process as follows. The first part, 1), leaves space for the experts to explore and identify possible “issues” using an open vocabulary: they describe ethical issues and flags in their own words, bringing in their own expertize and different backgrounds and viewpoints. The second part, 2), forces the discussion to reach a consensus by using a closed vocabulary: the four ethical principles and the seven requirements for trustworthy AI. The third part, 3), depends on the status of the assessment: the plan can serve to assess an AI system that is already deployed, or as a tool to support the design of the AI system.
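To make this two-vocabulary process concrete, the following minimal sketch (our own illustration in Python; the flag texts and mappings are hypothetical examples, not the consolidated results of the assessment) shows how open-vocabulary flags can be recorded and then mapped onto the closed vocabulary of the seven requirements for trustworthy AI.

```python
# Illustrative sketch: open-vocabulary "flags" raised by experts are mapped
# onto the closed vocabulary of the seven trustworthy AI requirements.
SEVEN_REQUIREMENTS = [
    "Human Agency and Oversight",
    "Technical Robustness and Safety",
    "Privacy and Data Governance",
    "Transparency",
    "Diversity, Non-Discrimination and Fairness",
    "Societal and Environmental Well-Being",
    "Accountability",
]

# Step 1: experts describe issues in their own words (open vocabulary).
# Step 2: consensus maps each flag to one or more requirements.
flags_to_requirements = {
    "Dispatchers may not trust the cardiac arrest alert":
        ["Human Agency and Oversight"],
    "More false negatives for callers not speaking Danish":
        ["Diversity, Non-Discrimination and Fairness"],
}

# Consolidation check: every mapping must use the closed vocabulary only.
assert all(requirement in SEVEN_REQUIREMENTS
           for requirements in flags_to_requirements.values()
           for requirement in requirements)
```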
This phase was undertaken by a general group of 30 participants with expertize in technical and theoretical computer science, as well as medical expertize specific to the particular use case.
Reflecting the iterative structure of the Assess Phase, the workshops were envisioned via a tripartite structure: the first for the primary stakeholders of the original use case to motivate and present their work; the second for Z-Inspection® participants to ask substantive and critical questions of the primary stakeholders; and the third for participants to map these questions to the ethical categories in the EU’s Guidelines for Trustworthy AI. Afterwards, the general group splintered into more specialized subgroups to continue the Z-Inspection®.
We plan to publish the results of task IV in a forthcoming paper
The Assess Phase of the process begins with the analysis of socio-technical scenarios
In order to answer the above research questions
we created scenarios of use for this AI system and discussed them in several workshops with the experts together with the prime stakeholder
We report the essential parts in this section
The basic idea is to analyze the AI system using socio-technical scenarios with relevant stakeholders, including domain, technical, legal, and ethics experts (Leikas et al., 2019)
We decided not to include the vendor company that designed and implemented the AI system in the analysis.
Socio-technical scenarios or usage scenarios are a useful tool to describe the aim of the system, the actors, their expectations, the goals of actors’ actions, the technology, and the context (Leikas et al., 2019)
Socio-technical scenarios can also be used to broaden stakeholder understanding of one’s own role in understanding technology
as well as awareness of stakeholder interdependence
Scenarios can be used as a part of the assessment of an AI system already deployed (as in this case)
or as a participatory design tool if the AI is in the design phase
Our team of experts used socio-technical scenarios to identify a list of potential ethical, technical, and legal issues that needed to be further deliberated. Expert groups worked together to systematically examine and elaborate the various tasks with respect to the different contexts of the AI. We then distributed the work to smaller working groups to continue the analysis.
We present in the rest of this section a summary of the socio-technical scenarios that we have created for this use case
We started by analyzing the prime aim of this AI system
namely to assist medical dispatchers (also referred to as call takers) when answering 112 emergency calls, helping them detect OHCA early during the calls
and increase the potential for saving lives
The system has been implemented because OHCA can be difficult for call takers to identify, and attention spans during calls are limited. For OHCA, a specific problem (compared with other 112 calls) is that the caller is never the patient—patients are unresponsive at the time of the call (Safar, 1988)—but a bystander (e.g., the spouse of the patient).
In analyzing the scenarios, we identified three classes of actors: primary, secondary, and tertiary.
We define primary actors as stakeholders in direct contact with the applied system
The primary actors are Stig Nikolaj Blomberg and his team (who specified the requirements for the design of the AI system, supplied the training and test data, and are the prime stakeholder of the use case); the patients; the patients’ family members; the callers/bystanders; the paramedics; and the medically trained dispatchers who answer the calls.
Secondary actors are stakeholders responsible for developing and implementing the system but not using it directly. These are the start-up company, independent from the owner of the use case, which designed and implemented the AI system, and the CEO of the Emergency Medical Services who gave permission to put the system into deployment.
Tertiary actors are part of the overall ecosystem where the AI system is used
The tertiary actors are the Copenhagen Emergency Medical Services (EMS), which is an integrated part of the Health Care System for the Capital Region of Denmark, consisting of one hospital trust with six university hospitals in nine locations and one emergency medical service (Lippert, 2018)
The actors listed above share one common goal: saving the patient’s life. Nevertheless, the actors have some distinct expectations and motivations:
- Caller/bystander: receive easy to understand and follow instructions to help patient;
- Dispatcher/call taker: provide targeted support and instructions to caller based on correct information;
- Paramedics: receive correct information to be well prepared upon arrival to care for the patient;
- Patients’ family members: know that everything possible was done to save the patient’s life and that no error occurred in the process (human or machine); if the patient dies, they may look for someone to hold responsible (the dispatcher/paramedic/AI system?);
- Emergency Medical Services (dispatch center): avoid malfunctioning of the system leading to poor performance;
- Hospital system: improve efficiency and efficacy;
- Public Health System in Denmark: improve efficiency and efficacy (i.e., number of lives saved due to the AI system).
The system was introduced to the call takers by the primary investigator of the research (i.e., the prime stakeholder) in several sessions, each consisting of a one-hour training session in which the AI system was presented, together with the objectives of the research and the protocol the dispatchers should follow in case of an alert. There was a one-month pilot test during which none of the alerts were randomized. This was performed to allow most of the dispatchers to experience an alert prior to the start of randomization. During this period, the primary investigator was present at the Emergency Medical Dispatch Center of the City of Copenhagen and available for dispatchers to address questions.
It is important at this point to review and create an evidence base that we will use to verify or support any claims made by the producer of the AI system and other relevant stakeholders; we summarize here the most relevant findings.
Detecting OHCA is the “king quality indicator” across medical services in Europe and the rest of the world (Wnent et al., 2015). One reason for this is that cardiac arrest is the most time-critical incident to which an emergency medical service can respond. If the emergency service performs substandardly in these incidents, it is highly unlikely that the patient will survive.
Survivors of OHCA may sustain brain injury due to inadequate cerebral perfusion during cardiac arrest. Anoxic brain damage after OHCA may result in a need for constant care or assistance with activities of daily living. Persons with anoxic brain damage may therefore require nursing home care after discharge (Middelkamp et al., 2007; Moulaert et al., 2009)
We look now at the context and process in which the AI system is used, including the interactions of the actors with each other and with the ML system.
Figure 2 depicts a hypothetical case of a call where an actual cardiac arrest is occurring: the patient is suffering a cardiac arrest and is therefore lifeless. A bystander (e.g., the spouse of the patient) calls the 112 emergency line and is connected to a dispatcher. The dispatcher is responsible for asking relevant questions of the caller; the ML system listens in on the call but does not pose any questions to the caller or the dispatcher. If the ML system suspects a cardiac arrest, it shows an “alert” to the dispatcher, who is then free to act upon this alert or ignore it.
FIGURE 2. Ideal case of Interaction between Bystander, Dispatcher, and the ML System. (with permission from Blomberg et al., 2019)
If the dispatcher agrees with the system in the detection of a cardiac arrest
they instruct the caller to administer CPR to the patient (very time-sensitive) and dispatch an ambulance
They should then stay on the call until the ambulance arrives
The prime stakeholder commissioned an external start-up company to implement the AI system because they discovered that off-the-shelf solutions did not work; in particular, no Danish language model was readily available.
For this use case, the ML system was designed and implemented with the expectation to detect cardiac arrest in calls faster and more reliably than human operators. An initial confirmation of this assumption was reported in a retrospective study conducted by the prime stakeholders (Blomberg et al., 2019)
They used a language model for transcribing the audio to text, based on a convolutional deep neural network (LeCun et al., 1989).
The ML model was trained and tested on datasets of audio files of calls to the 112 emergency line made in 2014, provided by the prime stakeholder to the company; other personal data was explicitly not used.
The text output of the language model was then fed to a classifier that predicted whether a cardiac arrest was happening or not (Figure 3). The AI system was applied directly to the audio stream, where the only preprocessing was a short-term Fourier transformation (Havtorn et al., 2020); hence, no explicit feature selection was made. The predictive model worked only on the text output of the automatic speech recognition model, i.e., predictions were based on the raw textual output. When an emergency call was analyzed in real time by the ML framework, the audio was processed without any prior editing or transcription and transformed into a textual representation of the call, which was then analyzed to output a prediction of cardiac arrest (Blomberg et al., 2021).
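As a rough illustration of the pipeline described above (the actual implementation is proprietary; `asr_model` and `arrest_classifier` below are hypothetical stand-ins, not the vendor’s components), the processing can be sketched as follows:

```python
# Sketch of the described pipeline: audio -> short-term Fourier transform
# -> Danish speech-to-text -> text classifier -> cardiac arrest alert.
import numpy as np
from scipy.signal import stft

def predict_cardiac_arrest(audio: np.ndarray, sample_rate: int,
                           asr_model, arrest_classifier,
                           alert_threshold: float = 0.5) -> bool:
    """Return True if the classifier suspects an ongoing cardiac arrest."""
    # The only preprocessing reported is a short-term Fourier transform;
    # no explicit feature selection is performed.
    _, _, spectrogram = stft(audio, fs=sample_rate)
    features = np.abs(spectrogram)

    # Hypothetical ASR stage: a convolutional network mapping the
    # spectrogram to a raw Danish transcript (no manual editing).
    transcript = asr_model.transcribe(features)

    # Hypothetical classifier stage: predicts the probability of cardiac
    # arrest from the raw text alone.
    probability = arrest_classifier.predict_proba(transcript)
    return bool(probability >= alert_threshold)
```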
Using a Danish language model means that calls in other languages were interpreted in a way that the cardiac arrest model could not work with (i.e., trying to extract Danish words from English speech). Beyond this, there is no explanation of why some calls were seemingly not understood.
There is no explanation of how the ML makes its predictions. The company that developed the AI system has some of its work in the open domain (Maaløe et al., 2019; Havtorn et al., 2020), but the exact details of the ML system used for this use case are not publicly available. The general principles used for this AI system are documented in Havtorn et al. (2020).
The paper describes the AI model implemented for this use case; however, it presents the model trained on different data sets, and the results are therefore not representative of this use case.
The details of the implementation of the AI system for this case are proprietary
Our expert team was informed by the prime stakeholder that the AI system does not have a CE-certification as a medical device
The assumption made by the designers of the AI system was that there are patterns in the conversations that the AI system can detect and use to alert the call takers quicker than a human dispatcher would—for example, from the words chosen and from the tone of a bystander.
The AI system analyses the conversation between the caller and the dispatcher and raises an alert if it suspects a cardiac arrest. The system is only used for detecting cardiac arrest; it does not propose questions to the dispatcher based on the dispatcher’s previous conversations.
In a previous implementation of the AI system, background noises caused the AI system to generate many false positives (which would allocate resources to the wrong patients and thereby delay treatment for others in greater need). For example, listening for agonal breathing resulted in many false positives.
Agonal breathing (short, labored, gasping breaths that occur because oxygen cannot reach the brain) (Roppolo et al., 2009) is defined as “an abnormal breathing pattern originating from lower brainstem neurons and characterized by labored breaths, gasping, and, often, myoclonus and grunting.” (NCBI, 2021)
In such cases, the dispatcher asked the caller to put the phone to the patient’s mouth to listen for breathing, which in the experience of the key stakeholders rarely produces any results.
While agonal breathing is highly predictive of OHCA
it can be perceived by a layperson as normal breathing
leading to the misunderstanding that the patient is alive and therefore not having OHCA
For this reason, one key design decision that the prime stakeholder took together with the software developers of the vendor company was to censor the ML model to disregard background noises in order to avoid too many false positives, even though some noises gave a good indication of a current cardiac arrest; the AI model therefore only listens to the words spoken by the caller and the dispatcher.
The AI converts the audio into text representing words, but the model is more complicated than just words: it looks for patterns in questions and answers. For example, if the caller replies yes to a question about unconsciousness, then the probability of cardiac arrest goes up; if both findings are positive—patient unconscious and blue lips—the probability rises further, as described in an interview with our expert team by the prime stakeholder.
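A toy calculation can illustrate this kind of pattern-based updating (the likelihood ratios below are invented for illustration only; the actual model learns such patterns from data rather than using hand-set values):

```python
import math

# Hypothetical likelihood ratios, for illustration only.
LIKELIHOOD_RATIOS = {
    "patient_unconscious": 8.0,  # caller answers "yes" to unconsciousness
    "blue_lips": 5.0,            # caller reports cyanosis
}

def update_probability(prior: float, findings: list[str]) -> float:
    """Update P(cardiac arrest) with each positive finding (log-odds form)."""
    log_odds = math.log(prior / (1.0 - prior))
    for finding in findings:
        log_odds += math.log(LIKELIHOOD_RATIOS[finding])
    return 1.0 / (1.0 + math.exp(-log_odds))

# With a base rate of ~0.8% OHCA calls (see the retrospective study below):
p_one = update_probability(0.008, ["patient_unconscious"])               # ~0.06
p_two = update_probability(0.008, ["patient_unconscious", "blue_lips"])  # ~0.24
```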
The medical dispatchers were involved in designing the alert that the system shows if it has detected signals indicating cardiac arrest
The dispatchers were consulted during several workshops conducted by the prime stakeholder
Callers and patients have not been involved in the system design with the reasoning that patients are clinically dead and callers are not concerned with how the system presents itself to the dispatcher
Design choices depend on the perspective taken by the prime stakeholders.
There was a conscious key choice during system design to prioritize high sensitivity over high specificity, as the prime stakeholder considered the potential harm of a false negative much higher than the potential harm of a false positive. Nevertheless, there was a trade-off: too many false positives would undermine the credibility of the system and waste resources, with the unintended consequence that, if resources run short, other patients can be harmed by false-positive results.
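The following sketch shows how this trade-off plays out numerically for a rare event. The confusion-matrix counts are illustrative: they are chosen to be consistent with the call volumes and positive predictive values reported in the retrospective study discussed below, not taken from the published data.

```python
def rates(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, and positive predictive value (PPV)."""
    return {
        "sensitivity": tp / (tp + fn),  # share of arrests that trigger an alert
        "specificity": tn / (tn + fp),  # share of non-arrests left alone
        "ppv": tp / (tp + fp),          # share of alerts that are real arrests
    }

# With rare events (~0.8% of 108,607 calls), even a small loss of specificity
# produces many false positives, so PPV drops although sensitivity rises.
print(rates(tp=772, fp=2921, fn=146, tn=104768))  # ML-like: sens ~0.84, PPV ~0.21
print(rates(tp=666, fp=1352, fn=252, tn=106337))  # dispatcher-like: sens ~0.73, PPV ~0.33
```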
The primary stakeholders performed two studies: 1) A retrospective study performed before they deployed the AI system in production (Blomberg et al., 2019); and later on, 2) a randomized clinical trial (Blomberg et al., 2021) whose results were published after the AI system was already in production
Notably, there were no changes to the architecture, and no retraining, between the studies.
In the retrospective study, the authors examined whether the ML system could recognize OHCA by analyzing the audio files of 108,607 emergency calls made in 2014 to the emergency medical dispatch center.
The performance of the ML system was compared to the actual recognition and time-to-recognition of cardiac arrest by medical dispatchers
Out of 108,607 emergency calls, 0.8% (918) were OHCA calls eligible for analysis. The ML system had a lower positive predictive value than the medical dispatchers (20.9 vs. 33.0%, p < 0.0001), but its time-to-recognition was shorter (median 44 vs. 54 s, p < 0.001) (Blomberg et al., 2019).
In the median case, the ML system was thus only slightly faster than the human; in calls where the dispatcher failed to recognize the arrest, however, it was minutes faster and well within the 1 min detection limit, making a huge practical difference in those cases.
The AI model for this use case was found to be more sensitive than the dispatcher but less specific. There were also cases where the model missed cardiac arrest, while the dispatcher did not (Blomberg et al., 2019)
One explanation is that the system was only trained on Danish data, whereas the dispatchers understand more languages (e.g., English).
False negatives were identified with the help of the Danish Cardiac Arrest Register (Dansk Hjertestopregister, 2020)
The register collects all emergency cases where either a bystander or ambulance personnel are applying CPR or defibrillation
The data is collected by ambulance personnel
In the randomized clinical trial of 5242 emergency calls
the ML model was listening to calls and could alert the medical dispatchers in cases of suspected cardiac arrest
it was possible to check how fast the machine could make a prediction compared to the human dispatchers
The dispatchers were instructed how to interact with the system: if they saw the alert symbol, they were to repeat the questions about patient consciousness and whether the patient was breathing.
In relation to the retrospective study, an extensive review of calls was done to examine whether the dispatcher tried to persuade the caller to perform CPR in cases where the patient was reported as being alive at the beginning of the conversation. Sometimes the machine predicted cardiac arrest while the patient was not yet suffering any symptoms but had arrested by the time the ambulance arrived at the call site. In other cases, the humans and the AI system missed the same patients.
During the interview with the prime stakeholders
we were told that benchmarking individual dispatchers was not done, as this was not part of the task, but also due to not wanting to jeopardize the cooperation between the researchers and the medical dispatchers. While it might be possible to use anonymized data, with 65 dispatchers and ∼1,000 cardiac arrests per year the number of cases per dispatcher would be small. Finding reasons why some dispatchers might perform worse than others was also not a goal of this trial.
Real-time performance on actual emergency calls was comparable to that reported in the retrospective study (Blomberg et al., 2021).
The result of this clinical trial was that “there was no significant improvement in recognition of out-of-hospital cardiac arrest during calls on which the model alerted dispatchers vs. those on which it did not; however, the machine learning model had higher sensitivity than dispatchers alone” (Blomberg et al., 2021).
The authors concluded that “these findings suggest that while a machine learning model recognized a significantly greater number of out-of-hospital cardiac arrests than dispatchers alone, this did not translate into improved cardiac arrest recognition by dispatchers” (Blomberg et al., 2021)
The AI system (i.e., the AI model) is proprietary to the company that implemented it, but the data used for training and testing the ML model belongs to the prime stakeholder of the use case. The data falls under the GDPR, and the prime stakeholder must comply with its requirements.
We do not assess here whether the AI system complies with the law
In this step of the Assess Phase, we identified possible ethical and technical and legal issues for the use of the AI within the given boundaries and context (see list of actors above). For some ethical issues, a tension may occur. We use the definition of tension from Whittlestone et al. (2019)
which refers to different ways in which values can be in conflict—i.e.
tensions between the pursuit of different values in technological applications rather than an abstract tension between the values themselves
The scenarios representing different usage situations of the system were discussed with a number of experts and when necessary other stakeholders
The experts examined the scenarios phase by phase against the trustworthy AI ethical values and requirements in order to define potential ethical issues, cross-checking each phase of the scenario to identify possible issues arising; they described, reported, and documented these at each step.
As a first step, we left space for the experts to explore and identify possible “issues” using an open vocabulary. They described the ethical issues and what we call “flags” in their own words, bringing their own expertize and different backgrounds and viewpoints; a flag could refer to technical or ethical concerns, or any combination of the two.
The process used to reach consensus is made transparent so that it is possible to go back and re-assess possible relevant changes in the ecosystems
To describe and classify the ethical issues and flags, and to identify and describe ethical tensions (if any), a selected number of experts in our team were asked to contribute. Such a variety of backgrounds is necessary to identify all aspects of the ethical implications of using AI.
While the interdisciplinary nature of the team is essential
it can pose a challenge on how to reach a consensus among the various experts
By analyzing the scenarios of usage for this use case, the following initial preliminary issues were observed and described; at this stage, the issues are described using an open vocabulary. A selection of the initial findings is presented in the rest of this section. A central starting point is the randomized clinical trial (Blomberg et al., 2021), which did not show any benefits of using the AI system in practice.
We were told in an interview with the prime stakeholder that patients were not study participants in this randomized control trial
The rationale given to us is that studying patient outcomes would have required the trial either to be a multicenter study or to continue for approximately eight years.
For our assessment, it is important to find out whether and how the ML system influences the interaction between the human actors, i.e., how it influences the conversation between the caller/bystander and the dispatcher, the duration of the call, and the outcome, and why during the clinical trial the use of the AI system did not translate into improved cardiac arrest recognition by dispatchers (Blomberg et al., 2021)
Some possible hypotheses that need to be verified are listed in the following
The dispatcher possibly did not trust the cardiac arrest alert
It might depend on how the system was introduced—how well-known cognitive biases were presented and labeled—for instance, whether the use of the system was framed as a learning opportunity for the dispatcher or as something that would disclose the incompetence of the dispatcher. To check this, it would be desirable to look for potential patterns of cognitive bias in the dispatchers and to provide specific feedback to them. Another hypothesis concerns how performance (or accuracy) is framed: trust might be increased by how the system is presented; if the people who implement it convey the idea of a “growth mindset” that dispatchers could identify with, that might really improve uptake and trust.
But it could also be that dispatchers did not pay sufficient attention to the output of the machine. This relates to the principle of human agency and oversight in trustworthy AI, discussed later in this section.
There seems to be a tension concerning the role of dispatchers in designing the algorithmic output
Perhaps certain sounds should also be used to ensure that the dispatcher perceives the urgency of the algorithmic output. One additional idea is that the look and functionality of the alert do not perform as they should, perhaps because the dispatchers have been part of designing it themselves. What makes them knowledgeable about how to get themselves to react in the desired way? Perhaps they are biased against a design that would make them feel pressure to follow the machine.
If one of the reasons why dispatchers are not following the system to the desired degree is that they find the AI system to have too many false positives
then this issue relates to the challenge of achieving a satisfactory interaction outcome between dispatchers and system
Another tension concerns whether dispatchers should be allowed to overrule a positive prediction made by the system, and not merely a negative one: what exactly is the right interplay or form of interaction between system and human, given the goals of using the system and the documented performance of each?
One of the biggest risks for this use case is where a correct dispatcher would be overruled by an incorrect AI system
The AI system does not predict “no cardiac arrest”; only positive predictions are shown, so a dispatcher who correctly suspects an arrest would not necessarily be affirmed by the system. Conversely, the dispatcher’s actions might trick the machine into believing there is a cardiac arrest, as the conversation might take a turn and start sounding like cardiac arrest to the machine.
We could not find a justification for choosing a particular balance between sensitivity and specificity. If specificity is too low, CPR is started on people who do not need it, and CPR administered over a longer period of time can lead to rib cage fractures; however, it is unlikely that CPR would be performed on a conscious patient for long, as the patient would probably fight back against it. If sensitivity is too low, arrests are missed; this results in no CPR being administered, and the patient remains dead. In this context, “too low” means that the AI system performs worse than the dispatchers.
The AI system is evaluated against human performance, as this system is only useful if it can assist humans; otherwise, it adds no value.
The idea that it is a serious defect if the machine does not confirm a correct positive call by a dispatcher points to an ethical tension concerning the machine-dispatcher interaction
While it seems to be a great harm if a dispatcher did not follow her judgment due to a lack of confirmation from the machine
it should also be considered whether this is any worse than having a dispatcher wrongly ignoring a true positive call by a machine
From the point of view of the person suffering a cardiac arrest
the harm of either mistake would be the same
it can be expected that allowing dispatchers to ignore positive calls from the machine will result in more deaths overall than making it compulsory for dispatchers to follow the machine’s advice. Hence, there is a tension between allowing dispatchers to ignore machine advice, perhaps to maintain full human control and responsibility for the decision-making, and saving all the lives that could be saved by obliging dispatchers to follow the advice of the machine.
A number of questions were raised during the analysis of the use of the AI system. For example, there is no structured way for feedback to flow from the ambulance medics to the dispatchers. We noted that there is no learning possibility in the current system—in contrast with other contexts such as aviation security, where “individuals’ attitudes (as opposed to personalities) are relatively malleable to training interventions and predict performance” (Sexton et al., 2000).
It is also important to verify: is it possible that by improving the set of questions the dispatchers ask, it will also be possible to improve the ML classifier? Such questions would ask for biological descriptors—for example, does the patient look pale? It would make sense for the dispatcher to ask questions tailored to aid the ML classifier, to reduce the risk of false alerts and false non-alerts.
An additional serious challenge is that the AI is based only on conversations and language, with all the connected risks: emotional language, miscomprehension of dialect, or callers who are not native speakers.
Our team of experts did not sign a Non-Disclosure Agreement (NDA) with the vendor company, which means that the AI system is considered a “black box,” with no details of the implementation of the AI algorithms and the AI model available to us; no direct communication between our team of experts and the vendor company was (or is) taking place.
The prime stakeholder cooperates with the vendor company
and they have declared no conflict of interest with them
The main issue here is that it is not apparent to the dispatchers how the AI system comes to its conclusions; it is not transparent to the dispatcher whether it is advisable to follow the system or not. Moreover, it is not transparent to the caller that an AI system is used in the process.
If transparency, at least in part, concerns the ability to understand why and how the AI system comes to produce a certain output given a certain input, then transparency about the values that have guided and justified the trade-offs would seem relevant. There is increasing awareness of the many ways in which the design of an AI system involves value-based judgments (Biddle, 2020)
Examples of this type of judgment include designers’ decisions about how to balance the costs of false positives and false negatives; trade-offs between accuracy and explainability; and choices between different formal and potentially conflicting definitions of algorithmic fairness, such as equality of error rates and equality of predictive value across socially salient groups. Such value-laden trade-offs would ideally be explicated.
It was reported in one of the workshops that if the caller was not with the patient—such as in another room or in a car on their way to the patient—the system had difficulty recognizing cardiac arrest. The same was found for people not speaking Danish or speaking with a heavy dialect.
Concepts such as “bias” and “fairness” are domain-specific and should be considered at various levels of abstraction (e.g., from the viewpoint of the healthcare actors down to the level of the ML model).
We look at possible bias in the use of the AI system
The AI system was only trained on Danish data, but the callers speak more languages. This is a fairness concern, as the system brings disadvantages for some groups.
A related serious challenge is that the AI is based only on conversations and language, with all the connected risks: emotional language, miscomprehension of dialect, or non-native speakers.
There is a risk that the AI system does not work equally well for all ethnic groups: it works best with Danish-speaking callers and actually handles caller diversity less well than the dispatchers do. As a consequence, ethnic minorities could be discriminated against.
When we looked at the data used to train the ML model
we observed that the dataset used to train the ML system was created by collecting data from the Copenhagen Emergency Medical Services from 2014
The AI system was tested with data from calls between September 1
It appears to be biased toward older males
We suspect this methodology to present risks of unwanted discrimination against minorities and under-represented races and sex
Predictions for individuals outside of the training distribution would likely be less accurate, and dispatchers acting on this information could be misled.
On the other hand, older males are the most frequent “consumers” of health care when it comes to cardiac arrest, so we might need to acknowledge why such a bias emerged in the first place: the calls used to train the ML model were predominantly with older males.
In general, AI encodes the same biases present in society, whether through algorithmic design or biased data sets (Owens and Walker, 2020)
Another problem is the responsibility and liability of the dispatcher. What are the possible legal liability implications of ignoring an alert coming from a ML system? The consequences of refusing or accepting an alert are central, and there is a need for justification of the choice. In this field, there is a risk of de-skilling (technological delegation, also in order not to be held accountable for ignoring or refusing an alert); we also need to think about the cultural level of a dispatcher and their ethical awareness of the consequences of their choice: how could they decide against the machine? Sometimes it could be easier to accept than to ignore or refuse, for many reasons.
In the randomized clinical trial (Blomberg et al., 2021)
it was reported that less than one in five alerts were true positives. Such a low positive predictive value might lead to alert fatigue. At the same time, the dispatcher is always ultimately liable. Alert fatigue is important and needs to be investigated, because one wants to make sure that the AI fits neatly into the medical workflow and actually improves patient outcomes.
If it turns out that the dispatcher is not following it because of alert fatigue
this would be a problem (also likely from a liability perspective)
A follow-up question would be what the interaction between the human and the AI system should be
It may be (depending on data from human-factors testing in the real world) that a fully autonomous AI would be safer than having too many additional human decisions involved; that said, it may be that in this particular situation there should be no discretion to ignore an alert—provided, of course, that the AI is highly accurate.
Does the dispatcher need to know how the ML works and the ways in which it can make mistakes? Ignoring the alert is a feasible option if the dispatcher can produce a good reason for ignoring it; in that case, the dispatchers ought to be educated in the inner workings of a ML model. A further question is whether it is realistic to assume that the dispatcher will actually ignore the system.
The requirement for human agency and oversight seeks to ensure that AI is used to put people in a position to make more informed decisions, not necessarily to make the decisions for them. It specifically recognizes a “right not to be subject to a decision based solely on automated processing when this […] significantly affects them” (AI HLEG, 2019).
For this case, the issue of the dispatcher having to distinguish whether the alert is valid or not is a major challenge. Support is needed to extract important signals during the call that can be difficult for a human to discern on their own. However, this use case also surfaced many other issues with an AI support system. Is it possible for those who are impacted by the decisions made by AI to challenge them? Is there sufficient human oversight (Hickman and Petrin, 2020)?
It seems that the dispatchers’ agency and autonomous decision-making are reduced by the system. The assumption is that the dispatchers do not primarily rely on their own decision-making but take the system into consideration; yet they do not know what criteria the system uses for its suggestions, advice, or decisions. This is a case of agency in AI where agents and those being impacted (i.e., the wider public) should be better informed about what criteria are used to determine the AI output.
Ideally, the AI acts as an educator to improve the dispatcher’s abilities. In practice, this is supposed to be a shared-control system in which part of the responsibility is conferred to the system, without the dispatchers knowing details about the decision-making criteria or the reliability of the system.
The inclusion of the ML system clearly decreases the dispatchers’ autonomy
This may reduce the dispatchers’ engagement in the process and diminish their sense of agency
What is the basis for dispatchers to decide whether to follow the system’s suggestion or not
It could be useful to build a heuristic tool that informs the dispatcher when and when not to rely on the system, or to put a disclaimer in place in certain situations (see above). Moreover, the question needs to be addressed of how a balance between ML system and dispatcher input in the shared decision-making process can be achieved. Is the process controlled by advice given by some supervisor to dispatchers as to whether or not they are supposed to follow the system’s output? Otherwise, dispatchers could simply decide not to follow the system’s advice.
Specific concerns for this use case include the caller’s lack of awareness (and therefore consent) that an AI system is included in the process and that the call is used for analysis and research
The goal of the GDPR is the protection of the fundamental rights and freedoms of natural persons (Art. 1 GDPR).
These are determined in accordance with the Charter of Fundamental Rights of the European Union and the European Convention on Human Rights
This also includes the right to non-discrimination pursuant to Article 21 of the Charter of Fundamental Rights, which is relevant here, as the system had more false negatives for people not speaking Danish or speaking with a heavy dialect.
Under the GDPR, the prime stakeholder of the use case is in charge of fulfilling the legal requirements. It would be desirable if the developers of the system were also responsible, as they implemented the AI system; but the responsibility of the vendors or developers of a system is not a requirement of the GDPR.
For this use case, the AI HLEG trustworthy AI guidelines require “that mechanisms be put in place to ensure responsibility and accountability for AI systems” and emphasize the importance of redress when unjust adverse impact occurs (AI HLEG, 2019).
For use cases such as this one, particularly those with life-or-death consequences, the potential harm can be substantial in both non-monetary and monetary terms. Mechanisms that allow for redress in case of the occurrence of any harm or adverse impact are therefore particularly important.
In the form in which the AI HLEG trustworthy AI guidelines address them, accountability mechanisms may refer to auditability that enables affected parties to assess algorithms, data, and design processes; minimization and reporting of negative impacts; consideration and balancing of trade-offs when there are tensions between various ethical principles; and redress other than liability in the technical sense.
there will almost inevitably be certain interactions between non-legal “ethical” principles and legal principles
and it is difficult to completely separate the two
the use case discussed herein raises important medical liability questions (some of which have already been alluded to above)
Due to the diffusion of responsibility that is typical for AI technology
the operation of such mechanisms is more complex than in usual medical liability cases
different actors (such as the institution using the AI
or those in charge of oversight of the AI) could potentially be responsible for the harm
It is therefore very difficult for any injured person to prove specific causation contributions or to show that an AI system was “defective,” since such proof would require knowledge of the details of the implementation of the AI algorithms and the AI models—which are, as noted, proprietary to the company that implemented them.
Accordingly, it is also difficult to put mechanisms in place to provide information to (end-)users and third parties about opportunities for redress, as required by the AI HLEG trustworthy AI guidelines (AI HLEG, 2019); as it stands, nothing more than general guidelines can be disclosed to these parties.
Entities developing and using the AI system should consider how—in line with the AI HLEG trustworthy AI guidelines—they can enhance the accountability factors mentioned above.
This could include facilitating audit processes
if appropriate via evaluation by internal and external auditors
and creating avenues for redress, apart from the pre-existing legal avenues, for those negatively affected by AI—for instance, by additional costs that could arise from an increase in false positives by the AI system, resulting in unnecessary call-taker-assisted CPR and in ambulances being dispatched when they are not necessary.
Our group faced a counterintuitive problem as we began analyzing ethical issues: the difficulty was not in locating issues and responding to them. The diversity of our group members opened so many angles and subjects of interest that our project was threatened by too much success; left to our own devices, the discussion could have expanded indefinitely.
To convert our work from theoretical discussion into practical and applicable results, we took two steps. First, we limited the set of ethical principles and approaches that we would employ. Concretely, we opted for the EC Ethics Guidelines for Trustworthy AI because it is a widely recognized set of principles. We also selected the list of frequent AI ethical tensions cataloged by the Nuffield Foundation (Whittlestone et al., 2019)
because they are so well explained and accessible. Second, we forced consensus by having each participant commit their personal thoughts to a short rubric, which required that each ethical dilemma and tension be narrated in our own individual words and then mapped onto the Ethics Guidelines.
We found that the structured approach helped funnel our thinking into a single
coherent set of results that we could apply to the case
One drawback of this modular approach is that it sacrifices some ethical nuance; the benefit of a common ethical language and structure for thought, however, is that a sizable group of experts from diverse backgrounds can efficiently work toward a single and useful set of results.
Next, we distributed the work into four independent Working Groups (WGs), created according to the skills and expertize of the various team members.
Distinct subgroups adopted different strategies for arriving at an internal consensus
in order to be mindful of relevant cognitive biases for different modes of expertize
The expectation was that distinct paths to consensus may be more or less suited to providing external validity checks on the judgments of particular Z-Inspection® participants
the ethics and law subgroup first selected two volunteers to lead their internal discussions
The wider group of four participants then held 2–3 separate calls to go through the ethical issues already flagged during the Assess Phase by all participants
The subgroup discussed these to see whether they were covered by previous work, as well as what assurance(s) had already been given by the team on each of them. The ethical issues were then given distinctive titles and narratives as needed, to make sure they did not overlap with each other.
The two subgroup leaders then passed these updated descriptions to the Z-Inspection® lead, who joined them with the descriptions provided by the other subgroups.
While this use case directly refers to the use of ML as a supportive tool to recognize cardiac arrest in emergency calls
there are various ways in which the findings of this qualitative analysis could be applicable to other contexts
First, the general framework for achieving trustworthy AI set out in the HLEG AI guidelines proved to be an adequate starting point for a specific case study discussion in the healthcare domain. Second, the ethical principles of the HLEG AI guidelines need some context-specific specification. Third, this contextualization and specification can successfully be undertaken by an interdisciplinary group of researchers that together is able not only to bring in the relevant scientific and technological expertize, but also to highlight the various facets of the ethical principles as they play out in the respective case.
This layered approach allowed us to minimize (cognitive) biases in our assessment: it was not possible for one view to influence all participants, as could potentially happen in a single meeting with all experts.
REQUIREMENT #1: Human Agency and Oversight
REQUIREMENT #2: Technical Robustness and Safety (sub-requirement: Resilience to Attack and Security)
REQUIREMENT #3: Privacy and Data Governance
REQUIREMENT #6: Societal and Environmental Well-Being (sub-requirement: Environmental Well-Being)
To help the process, especially as an aid to experts who might not have sufficient background in ethics, we used a catalog of predefined ethical tensions. We chose the catalog defined by the Nuffield Foundation (Whittlestone et al., 2019).
When a specific “issue” did not correspond to one or more of the predefined ethical tensions
experts described them with their own words
The results of the four WGs were then “merged” into one consolidated mapping in a consensus workshop. We present here the consolidated mapping for this use case.
It is unclear whether the dispatcher should be advised or controlled by the AI
and it is unclear how the ultimate decision is made
Respect for Human Autonomy > Human Agency and Oversight > Human Agency and Autonomy
Any use of an AI system in the healthcare system needs to be accompanied by a clear definition of its use. Specifically, it is unclear how the decision support tool should figure in the dispatchers’ decision-making: should they defer to the tool’s decision (especially since its performance seems to surpass human capabilities)?
We also need to take into account that the dispatchers in Denmark are highly trained professionals that will not easily defer to an automated tool without a certain level of clinical validation and trust in the system
Despite the fact that the dispatchers are the primary users
they were not involved in the system design
To what extent is the caller’s personally identifying information protected
and who has access to information about the caller
Prevention of Harm > Privacy and Data Governance > Privacy; Prevention of Harm > Privacy and Data Governance > Data Governance
The main issue here is whether and how the data can be identified and traced back to particular stakeholders
The study participants claimed to follow the GDPR standards put in place by the EU, which in this case did help specify the respective roles of the dispatcher and the other parties involved. However, these descriptions must be augmented by protections that further specify how data will be used and stored, for how long this will occur before its disposal, and what form(s) of anonymization will be maintained so that only trusted, legitimized parties can access the identifying information directly.
It is unclear whether the study participants should be the medical dispatchers or also the callers and patients.
Respect for Human Autonomy > Fundamental Rights > Human Agency and Autonomy
There is a question of whether a research ethics board should have reviewed the study; the need for an ethical approval was waived here by the research ethics committee in the Capital Region of Denmark
Written informed consent was obtained only from the medical dispatchers.
there is the question of whether there should have been a formal ethical review and a community consultation process
or a form of surrogate or deferred consent
to address the ethical implications regarding trial patients
as is common in comparable studies reviewed by institutional review boards in the United States and United Kingdom
The training data is likely not sufficient to account for relevant differences in languages
Non-Discrimination and Fairness > Avoidance of Unfair Bias
There is likely empirical bias since the tool was developed in a predominantly white Danish patient group
It is unclear how the tool would perform in patients with accents
There is also a concern that this tool is not evaluated for fairness with respect to outcomes in a variety of populations
Non-native speakers of Danish may not have the same outcomes. It was reported that Swedish and English speakers were well represented, but one would need to ensure a broad training set.
It would also be important to see if analyses show any bias in results regarding age
The concern is that the training data may not have a diverse enough representation
The tool’s characteristic performance
such as a higher rate of false positives compared to human dispatchers
could adversely affect health outcomes for patients
Prevention of Harm > Technical Robustness and Safety > Accuracy
The algorithm did not appear to reduce the effectiveness of emergency dispatchers, but it also did not significantly improve it; it has a higher sensitivity but also leads to more false positives.
There should be a firm decision on thresholds for false positives vs. false negatives. The risk of not doing CPR when someone needs it exceeds the risk of doing CPR when it is not needed; however, excessive false positives put a strain on healthcare resources by sending out ambulances and staff to false alarms, potentially harming other patients in need of these resources.
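One way to make such a decision explicit is to choose the alert threshold by minimizing an asymmetric expected cost, as in the following sketch (the cost weights are hypothetical placeholders; choosing them is exactly the value judgment discussed here):

```python
import numpy as np

def pick_threshold(probs: np.ndarray, labels: np.ndarray,
                   cost_fn: float = 50.0, cost_fp: float = 1.0) -> float:
    """Return the alert threshold minimizing expected misclassification cost.
    cost_fn >> cost_fp encodes that a missed arrest is judged far worse
    than an unnecessary alert."""
    thresholds = np.linspace(0.01, 0.99, 99)
    costs = []
    for t in thresholds:
        alerts = probs >= t
        fn = np.sum(~alerts & (labels == 1))  # missed arrests
        fp = np.sum(alerts & (labels == 0))   # false alarms
        costs.append(cost_fn * fn + cost_fp * fp)
    return float(thresholds[int(np.argmin(costs))])
```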
The gold standard for assessing whether the tool is helpful for the given use case is to analyze its impact on outcomes. Given the low likelihood of survival from out-of-hospital cardiac arrest, however, there was no analysis attempting to assess the impact on survival, as it would take years in a unicentric study.
The tool lacks explainability, leading to challenges when dispatcher and tool are in disagreement.
Explicability > Transparency > Explainability
Outcomes are based on a transcription of the conversation between dispatcher and caller, and it is not clear what is used from these transcripts to trigger an alert. This lack of transparency may have contributed to the noted lack of trust among the dispatchers, as may the limited training of the users.
there is a lack of transparency regarding whether and which value judgments went into the design of the model
Such value judgments are important because explaining the output is partly a matter of accounting for the design decisions that humans have made
The model may suffer from security vulnerabilities due to cyber attacks
Prevention of Harm -> Technical Robustness and Safety -> Resilience to Attack and Security
The data should also be adequately protected against potential cyber-attacks; without access to the implementation details, however, it seems hard to determine resistance to adversarial attack scenarios.
The added value of the system to particular stakeholders is not clear
Prevention of Harm > Societal and Environmental Wellbeing > Impact on Society at Large or Democracy
The AI system did not significantly improve the dispatchers’ ability to recognize cardiac arrests. AI should improve medical practice rather than disrupt it or make it more complicated. How much improvement is needed to conclude that an AI system should be deployed in clinical practice? Will it be cost-effective (worth the electric bill)?
The trials conducted did not include a diverse group of patients or dispatchers
Non-Discrimination and Fairness > Stakeholder Participation
Clinical trials are rare in the field of AI and are certainly welcomed
The design of the trial needs to be carefully considered and thoroughly thought through in consideration of stakeholder priorities
It is unclear whether the tool is a medical device or not and whether its safety was sufficiently assessed by the involved ethics committees and authorities
Prevention of Harm > Technical robustness and safety > General safety
It is unclear whether the tool is a medical device or not
it is also unclear whether the clinical studies should have fallen under medical device regulation
It is thus also unclear whether the Danish authorities and the involved ethics committees assessed the safety of the tool sufficiently
This is the list of the consolidated tensions
Description: An autonomous AI system may or may not be more accurate than the system working in interaction with a human dispatcher (support system).
Description: The more data that is collected, the better the AI system will likely perform; at the same time, there is a challenge to adequately protect and maintain the privacy of the individuals involved.
Description: There is a question of who is the participant - the dispatcher and/or caller/patient
their autonomy should be respected and informed consent be obtained according to best practices for emergency medicine
Description: The algorithm is accurate on average but may systematically discriminate against specific minorities of callers and/or dispatchers due to ethnic and gender bias in the training data
Description: There is a risk of incorrect diagnosis and intervention arising from false positives, relative to the performance of human dispatchers.
Description: The tool lacks explainability but explainable AI systems may be less accurate than non-interpretable models
Description: The system should be transparent and available to various stakeholders
but also must have safeguards to resist external threats that may limit transparency conditions
Description: AI systems should be effective and improve medical interventions without unnecessary disruption.
Description: Clinical trials are rare in the AI field but could ensure that devices are safe and effective; at the same time, there is a tension in that clinical trials are time-consuming and costly and cannot be provided by manufacturers to the necessary degree.
A challenge is related to making the abstract principles formulated in the guidelines applicable to the respective case study. This almost certainly will involve some narrowing down of the broad concepts (such as autonomy or privacy) reflected in the principles to conceptions that prove useful in the case study, both because of time constraints and in view of the interdisciplinary nature of the group of researchers, which limits the ethical and legal bandwidth and depth of the discussion.
Despite the broad multidisciplinary expertize of our Z-Inspection® assessment team
it was surprising how challenging it was to map broad general guidelines to a concrete and specific use case
This highlights that all institutions tasked with assessing and regulating AI in healthcare products should exhibit two important characteristics. On the one hand, flexibility in assessing the solution at hand: it is likely that different solutions will have very different advantages and challenges, and hardcoding certain requirements into regulation is thus probably not a recommended way forward. On the other hand, this requires broad and wide expertize in all areas related to AI in healthcare.
In the next phase, we will work on task IV of the Assess Phase, the verification of requirements. The goal is to start from the list of consolidated ethical, technical, and legal issues, verify the claims associated with them, and give feedback to the expert teams so that they can revise the final list of ethical issues and tensions.
To verify claims, we plan to use a mixed approach, consisting in adapting concepts from the Claims, Arguments, Evidence (CAE) framework and using the ALTAI web tool. CAE is often used as a framework in aviation, nuclear, and defense industries to reason about safety, security, reliability, and dependability. Recent work has begun applying CAE to the safety analysis of AI systems (Brundage et al., 2020; Zhao et al., 2020)
We will adjust the concepts to apply to the seven requirements for trustworthy AI
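As a minimal sketch of how such claims could be recorded for verification (our own illustration, not an official CAE schema; the example content paraphrases findings reported above):

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    source: str   # e.g., a published study or an interview note
    summary: str

@dataclass
class Claim:
    requirement: str  # one of the seven trustworthy AI requirements
    statement: str
    arguments: list = field(default_factory=list)
    evidence: list = field(default_factory=list)

claim = Claim(
    requirement="Technical Robustness and Safety",
    statement="The ML system recognizes OHCA with higher sensitivity than "
              "dispatchers alone.",
    arguments=["Sensitivity was compared in a retrospective study and in a "
               "randomized clinical trial."],
    evidence=[
        Evidence("Blomberg et al., 2019",
                 "Retrospective study of 108,607 emergency calls from 2014."),
        Evidence("Blomberg et al., 2021",
                 "Randomized clinical trial of 5,242 emergency calls."),
    ],
)
```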
The ALTAI web tool (AI HLEG, 2020) is an interactive general self-assessment beta prototype tool based on the EU trustworthy AI framework
The tool is not specific to the domain of healthcare
It gives only general recommendations and produces a generic score
We will adapt the general recommendations resulting from the tool, take into account the results of the verification phase and the final list of ethical issues and tensions, and then produce specific recommendations relevant for the domain of healthcare.
The output of the assessment will be a report containing recommendations to the key stakeholders
Such recommendations should be considered a source of qualified information that helps decision makers make good decisions and that supports the decision-making process for defining appropriate trade-offs. They would also help continue the discussion by engaging additional stakeholders in the decision process.
We list here a preliminary list of recommendations for this use case.
Recommendation 1: Provide explanations of the system’s predictions that are easy for stakeholders to understand and that offer different levels of interpretation for judging the relevance of an individual prediction; for example, an explanation may involve the words that were most predictive.
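As a hedged sketch of what such an explanation aid could look like (using a simple, transparent linear model as a stand-in; the vendor’s proprietary classifier is more complex and may not expose weights this way):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def top_predictive_words(transcripts: list, labels: list, k: int = 10):
    """Fit a transparent linear classifier on call transcripts and return
    the k words most indicative of a cardiac arrest call."""
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(transcripts)
    model = LogisticRegression(max_iter=1000).fit(X, labels)
    words = vectorizer.get_feature_names_out()
    ranked = sorted(zip(model.coef_[0], words), reverse=True)
    return [(word, round(float(weight), 3)) for weight, word in ranked[:k]]
```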
Recommendation 2: The use of the ML system could be confined to those cases that are inside the training distribution, as sketched below. This would allow the dispatchers to better identify the cases where they should rely on their own decision-making, and prevent them from being overruled by the system when the system lacks reliability. This approach would increase dispatcher autonomy and could improve the overall outcome.
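A sketch of this gating idea, assuming a hypothetical language-identification component (`language_id`) alongside the hypothetical models sketched earlier:

```python
def should_show_alert(audio_features, asr_model, arrest_classifier,
                      language_id, danish_confidence: float = 0.9) -> bool:
    """Suppress the ML alert when a call falls outside the training
    distribution, e.g., when the call is unlikely to be in Danish."""
    if language_id.predict_proba(audio_features).get("da", 0.0) < danish_confidence:
        # Out of distribution: stay silent and leave the call to the dispatcher.
        return False
    transcript = asr_model.transcribe(audio_features)
    return bool(arrest_classifier.predict_proba(transcript) >= 0.5)
```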
Recommendation 3: The group of (potential future) patients and (potential future) callers could be interested in how the system functions and is developed. User and stakeholder involvement could be very helpful in the process of re-designing the AI system.
Recommendation 4: It is important to learn how the protocol (what questions are asked, etc.) does or does not influence the accuracy of the ML output.
Further research work should be performed to answer this question
The goal should be to responsibly integrate the classifier into the context of the dispatcher calls rather than just have it passively observe the call and make “trustworthy” recommendations
This requires reimagining the context of the calls themselves (with new protocols, for example).
Recommendation 5: Although we did not assess the legal aspects of the AI system, we suggest that the prime stakeholder verify with the locally competent legal authorities whether the AI system needs a CE certification as a medical device according to the definition in the current Medical Device Directives (MDD). Under the forthcoming Medical Device Regulation (MDR) in the EU, “software that is used for human beings for the medical purpose of prediction or prognosis of disease will be classified as a medical device.” Under the MDR, the AI system will thus be classified as a medical device and would therefore need a CE certification.
One of the main problems debated in the ethics of AI is the possible complete “replacement” of the human decision-making capacity. Avoiding such replacement is a recognition of the principle of human dignity, and of the principles of non-maleficence (do no harm to humans) and beneficence (do good to humans) in ethics.
Human involvement in design and construction is not enough to discharge this concern
Humans need to maintain a level of control and oversight over the AI
allowing it to cognitively assist human decisions
Machines should not compete with, but complete, human actions.
The AI HLEG trustworthy AI guidelines requirement for human agency and oversight tacitly acknowledges this debate and seeks to ensure that AI is used to inform decision making
The AI HLEG trustworthy AI guidelines specifically recognize a “right not to be subject to a decision based solely on automated processing when this [...] significantly affects them” (AI HLEG, 2019).
a certain level of human involvement is necessary
the first issue to consider from a human agency and oversight perspective will be what the appropriate level of human involvement is
Different levels of human oversight in AI have been categorized as human in the loop, human on the loop, or human in command. Human in command describes a high level of human involvement, with human in the loop and on the loop incrementally less. To the extent a high level of human involvement is deemed necessary from an ethical standpoint, this will necessarily reduce some of the benefits the AI system was intended to bring (Hickman and Petrin, 2020)
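These three levels could be captured schematically as follows (a descriptive sketch of the categorization, not a normative classification of this use case):

```python
from enum import Enum

class HumanOversight(Enum):
    HUMAN_IN_THE_LOOP = "human intervention in every decision cycle"
    HUMAN_ON_THE_LOOP = "human monitoring, with the ability to intervene"
    HUMAN_IN_COMMAND = "human control over the overall activity and its impact"
```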
Conversely, minimal human oversight raises concerns regarding monitoring of the accuracy of the AI system, and regarding the potential harm and liability that could result.
There is a trade-off between the efficiency gains of the AI and the ability to oversee its decisions
The appropriate balance for this trade-off needs to be part of the assessment
A second issue relating to human involvement is the impact of the presence of AI in the process and its perception by the humans charged with overseeing it (supervisors) and processing its output (dispatchers)
If there is a perception that the AI is not often wrong, what we know about human heuristics suggests that supervisors and dispatchers will be less likely to spot anomalies when they arise. In the randomized clinical trial, fewer than one in five alerts were true positives, raising the possibility of alert fatigue, which could result in true alerts being ignored. These potential issues, and others like them, will need to be identified, and methods to resolve them will need to be explored. Such resolutions could include, for example, a heuristic tool that informs the dispatcher when and when not to rely on the system (Blomberg et al., 2021), or putting a disclaimer in place in certain situations.
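As a purely hypothetical illustration of such a heuristic tool (it is not the tool discussed by Blomberg et al., 2021), a dispatch center could track the running precision of recent alerts and warn when it drops to a level at which alert fatigue becomes likely:

```python
# Hypothetical alert-fatigue monitor: running precision over a sliding window.
from collections import deque

class AlertMonitor:
    def __init__(self, window: int = 200, min_precision: float = 0.2):
        self.outcomes = deque(maxlen=window)  # True = alert was a true positive
        self.min_precision = min_precision    # e.g., the ~1-in-5 rate from the trial

    def record(self, true_positive: bool) -> None:
        self.outcomes.append(true_positive)

    def precision(self):
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else None

    def advice(self) -> str:
        p = self.precision()
        if p is None:
            return "insufficient data"
        return "rely on alerts" if p >= self.min_precision else "caution: low alert precision"
```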
Human dignity, which is characterized by our social relations, requires that we be aware of whether and when we are interacting with a machine or another human being, and that we reserve the right to vest certain tasks in either the human or the machine. Ultimately, the ethics of AI is the ethics of human beings: the machine cannot obscure human agency, and humans should be kept at the center (a human-centric approach).
The assessment of trustworthy AI cannot take all of these manifold legal implications into account, and it therefore cannot replace a proper legal assessment; it can and does, however, provide a plausibility check with a focus on the most specific legal challenges (see the section on Possible Accountability Issues). A prominent challenge is liability for harm arising from the AI system (European Commission, Directorate General for Justice and Consumers (2019)). Such liability can be both civil and criminal in nature. On the civil side, it could be based on contractual and non-contractual theories, with the latter including general tort law and, if an AI system is or will be classified as a product in the legal sense, product liability principles. Product liability is governed both by harmonized EU principles and by national/domestic law.
In addition, there will likely be new laws specifically geared towards liability for AI; the European Parliament has recently drafted a resolution with recommendations to the Commission concerning civil liability for AI (European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL))).
In terms of relevant parties, individuals and/or legal entities involved in designing, developing, importing, distributing, selling, and using AI, among other roles, are potentially exposed. Indeed, we could even imagine that not using AI may – if not now, then at least in the future – lead to liability (Price et al., 2019)
In this use case, the parties that are potentially exposed to liability include, in addition to the parties mentioned above, the hospital. It is even possible to imagine scenarios where the person who called the emergency number, or other bystanders, might face liability for incorrectly or unnecessarily attempting resuscitation of patients.
It is important to note that liability issues, as well as any other legal issues relevant to AI and its specific use cases, are governed by local laws in the relevant country or countries to which there are pertinent connections (such as the country in which the AI is being used). It is therefore critical for those involved in the use case discussed herein to ensure that an in-depth legal analysis is conducted by lawyers qualified to advise on legal matters in the relevant jurisdiction(s) and with specialist knowledge in the various subject areas, and that any insights therefrom are considered when implementing the AI system.
The Danish executive order implements the EU Council Medical Device Directive 93/42/EEC into Danish law. The responsible authority is the Danish Medicines Agency (DKMA) under the Danish Health Authority. According to DKMA, no requirements in the Danish executive order go beyond what is stated in the EU directive.11 Importantly, as stated, whether a device falls within the definition of the executive order depends on the intended purpose of the device, which is defined by the manufacturer. If it does, the device "must fulfill the requirements in the applicable legislation and classified according to risk and CE marked".13 Some of these requirements are: a process for the development and design of safe products; clinical evaluation; risk analysis; labeling and information about the manufacturer; instructions in Danish; and an established market surveillance system.
Every audit framework, ethics-as-a-service approach, or impact assessment tool for AI systems has its limitations and shortcomings, and Z-Inspection® is no different. The method has several notable strengths (Zicari et al., 2021), among them the absence of conflicting interests on the side of its members, but it also has inherent limits. The evaluation cannot guarantee that the organization administering the AI system in question will adhere to the recommendations that are given. On the other hand, since participation in the inspection is voluntary, organizations tend to come with a high openness to proposed changes.
A requirement for AI systems that is becoming more and more salient is that their computing power and energy consumption should also be estimated (Strubell et al., 2019). During the assessment we had no access to information on the energy requirements of model training, and we therefore cannot give recommendations in this respect.
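Had such information been available, even a back-of-the-envelope estimate in the spirit of Strubell et al. (2019) would have been informative. The sketch below shows the arithmetic; every input (power draw, GPU count, training hours, carbon intensity) is a placeholder, and the 1.58 power usage effectiveness is the industry average used by Strubell et al.

```python
# Back-of-the-envelope training footprint; all inputs are placeholders.
def training_footprint(gpu_watts: float, n_gpus: int, hours: float,
                       pue: float = 1.58, kg_co2_per_kwh: float = 0.4):
    """Return (energy in kWh, emissions in kg CO2e) for one training run."""
    kwh = gpu_watts * n_gpus * hours / 1000 * pue
    return kwh, kwh * kg_co2_per_kwh

kwh, co2 = training_footprint(gpu_watts=300, n_gpus=8, hours=72)
print(f"~{kwh:.0f} kWh, ~{co2:.0f} kg CO2e")
```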
The best practice defined in this paper illustrates that our holistic and multidisciplinary evaluation process can be used to evaluate the risks and identify the ethical tensions arising from the use of the AI system and can also be used to improve current or future versions of an AI system
AI systems can raise ethical and societal concerns from direct stakeholders, and from indirect stakeholders such as politicians or the general media. The nature of these concerns can vary and can cover a vast array of topics, such as data security.
The interdisciplinary approach of the evaluation can help to identify these concerns in many different fields
Evaluating AI development with a holistic approach like Z-Inspection® creates benefits related to general acceptance of, and concerns about, an AI project inside and outside the institution running it. The approach can improve the quality of the project's processes and increase transparency about possible conflicts of interest, which improves the quality of communication for any kind of stakeholder. Moreover, communicating the evaluation process itself can help reinforce trust in such a system by making its exact workings transparent, and can help management and executive teams explain their decisions about funding and governance as well as the system's operation.
An important lesson from this use case is that there should be some requirement that independent experts can assess the system before its deployment
This seems relevant for determining trustworthiness in the first place, as a means toward sociotechnical validation of the AI system.
One way to understand the use case we have been examining is to see the "medical discussions" as possible forms of validation metrics rather than simply as mechanisms for "verifiable claims," as OpenAI has recently argued (Brundage et al., 2020).
Instead of formal verification procedures concerned with matching the model to the proposed specification, empirical validation would investigate whether the specification itself is well-founded, treating the model parameters as “hypotheses” that must be tested against real-world conditions under controlled settings (Dobbe et al., 2019)
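A minimal sketch of what such empirical validation could look like: treat a claimed performance figure as a hypothesis and test it against prospectively collected, real-world call outcomes. The claimed sensitivity and the counts below are invented for illustration.

```python
# Hypothetical validation check: does real-world performance support the claim?
from statsmodels.stats.proportion import proportion_confint

claimed_sensitivity = 0.85            # hypothetical specification claim
tp, fn = 156, 38                      # invented prospective counts

observed = tp / (tp + fn)
lo, hi = proportion_confint(tp, tp + fn, alpha=0.05, method="wilson")
print(f"observed sensitivity {observed:.2f} (95% CI {lo:.2f}-{hi:.2f})")
if hi < claimed_sensitivity:
    print("claim not supported under real-world conditions")
```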
The question for our inspection is thus not just how to make AI systems that are trustworthy, but how to make sure that the proposed definition of trustworthiness actually matches the expectations of the affected parties, and how to draw attention to the way particular use cases highlight discrepancies within and across stakeholder groups and point to a need for further validation and regulation through clinical standards.
While this use case refers directly to the use of machine learning as a supportive tool to recognize cardiac arrest in emergency calls, the general framework for achieving trustworthy AI set out in the HLEG AI guidelines proved to be an adequate starting point for a specific case study discussion in the healthcare domain. However, the ethical principles of the HLEG AI guidelines need some context-specific specification. This contextualization and specification can successfully be undertaken by an interdisciplinary group of researchers who together are able not only to bring in the relevant scientific, medical, and technological expertise, but also to highlight the various facets of the ethical principles as they play out in the respective case.
There is a danger that a false or inaccurate inspection will create natural skepticism in the recipient. There are also legal issues (some of which are addressed in the Human-Machine Interaction and Legal Perspective sections). This is a well-known problem for all quality processes. We alleviate it by using open development and incremental improvement to establish a process and brand ("Z-Inspected").
Our colleague and friend Naveed Mushtaq passed away on December 27, after suffering a sudden cardiac arrest a few weeks earlier.
Proprietary datasets were analyzed in this study. These data can be obtained on request from the authors.
MW were the leaders of the four working groups and were responsible for producing the consolidated mapping of Ethical Issues and Tensions; EH and MP prepared the Supplementary Materials; paper preparation: RZ; paper editing: SG.
SG was supported by a grant from the Collaborative Research Program for Biomedical Innovation Law
a scientifically independent collaborative research program supported by a Novo Nordisk Foundation grant (NNF17SA0027784)
JA received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No
TH was supported by the Cluster of Excellence "Machine Learning: New Perspectives for Science" funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy (Reference Number EXC 2064/1, Project ID 390727645).
All other authors did not receive any funding (neither private nor public) to conduct this work
Author RW was employed by the company Fitbiomics
Author DO was employed by the company Inrupt
The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest
We thank Anne Riechert and Leonardo Espinosa-Leal for providing valuable comments on this manuscript, and Norman Stürtz for his invaluable contribution to the definition of the Z-Inspection® process.
1Z-Inspection® is a registered trademark
2Another relevant document at EU level is the European Group on Ethics in Science and New Technologies (EGE) at the European Commission, Statement on AI, Robotics and “Autonomous Systems,” Brussels, March 2018 https://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf
3In the EGE document, the principles listed include: 1. Human Dignity (meaningful human control and awareness in the interaction with humans or machines); 2. …
4To describe inclusion of different scientific disciplines in the same project, various terms exist, such as multidisciplinary, transdisciplinary, and interdisciplinary (Frodeman et al., 2012; Budtz Pedersen et al., 2015)
Our approach is most accurately described as “interdisciplinary” since the research is developed in between disciplines about a research question formulated from within the research group
6https://www.retsinformation.dk/eli/lta/2019/903
7https://en.stps.dk/en/
8The principle of Meaningful Human Control was first suggested in the field of weapon systems. It means that humans, and not computers and their algorithms, should ultimately remain in control; the term is also used with reference to human oversight.
9https://ec.europa.eu/docsroom/documents/17921
10https://www.retsinformation.dk/eli/lta/2008/1263
11https://laegemiddelstyrelsen.dk/en/devices/legislation-and-guidance/guidance/guidance-for-manufacturers-on-health-apps-and-software-as-medical-devices/#
12https://laegemiddelstyrelsen.dk/en/devices/legislation-and-guidance/guidance/guidance-for-manufacturers-on-health-apps-and-software-as-medical-devices/
13https://laegemiddelstyrelsen.dk/en/devices/legislation-and-guidance/guidance/guidance-for-manufacturers-on-health-apps-and-software-as-medical-devices/
The Reasonable Robot: Artificial Intelligence and the Law.
AI HLEG (2019). High-Level Expert Group on Artificial Intelligence: Ethics Guidelines for Trustworthy AI. European Commission. Available at: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
AI HLEG (2020). High-Level Expert Group on Artificial Intelligence: Assessment List for Trustworthy Artificial Intelligence (ALTAI) for Self-Assessment. European Commission. Available at: https://ec.europa.eu/digital-single-market/en/news/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment
Bærøe et al. How to Achieve Trustworthy Artificial Intelligence for Health.
Global Incidences of Out-of-Hospital Cardiac Arrest and Survival Rates: Systematic Review of 67 Prospective Studies.
Effect of Machine Learning on Dispatcher Recognition of Out-of-Hospital Cardiac Arrest during Calls to Emergency Medical Services.
Machine Learning as a Supportive Tool to Recognize Cardiac Arrest in Emergency Calls.
Brundage, M., Avin, S., Wang, J., Belfield, H., Krueger, G., Hadfield, G., et al. (2020). Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims. arXiv:2004.07213 [cs]. Available at: http://arxiv.org/abs/2004.07213
Kampen om disciplinerne: Viden og videnskabelighed i humanistisk forskning [The Battle over the Disciplines: Knowledge and Scholarship in Humanities Research].
Buolamwini, J., and Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. New York, NY: Conference on Fairness, Accountability and Transparency, 77–91. Available at: http://proceedings.mlr.press/v81/buolamwini18a.html
Improving Survival from Sudden Cardiac Arrest: The "Chain of Survival" Concept. A Statement for Health Professionals from the Advanced Cardiac Life Support Subcommittee and the Emergency Cardiac Care Committee.
Deloitte (2020). The Socio-Economic Impact of AI in Healthcare, 48. Available at: https://www.medtecheurope.org/resource-library/the-socio-economic-impact-of-ai-in-healthcare-addressing-barriers-to-adoption-for-new-healthcare-technologies-in-europe/
Dansk Hjertestopregister (DHR) (n.d.). Retrieved February 8, 2021. Available at: https://hjertestopregister.dk/
Diagnosis of Out-of-Hospital Cardiac Arrest by Emergency Medical Dispatch: A Diagnostic Systematic Review.
"Ethical Maintenance of Artificial Intelligence Systems," in Artificial Intelligence for Sustainable Value Creation.
Eisenberg, M., Lippert, F. K., Castren, M., Moore, F., Ong, M., Rea, T., et al. (2018). The Global Resuscitation Alliance: 2018 Update. doi:10.7591/9781501719783. Available at: https://www.globalresuscitationalliance.org/wp-content/pdf/acting_on_the_call.pdf
MedTech Europe (2019). Trustworthy Artificial Intelligence (AI) in Healthcare. Available at: https://www.medtecheurope.org/resource-library/trustworthy-ai-in-healthcare/
European Commission, Directorate General for Justice and Consumers (2019). Liability for Artificial Intelligence and Other Emerging Digital Technologies. Brussels, Belgium: Publications Office. Available at: https://data.europa.eu/doi/10.2838/25362
European Parliament & Council of European Union (1993). Council Directive 93/42/EEC of 14 June 1993 Concerning Medical Devices.
European Parliament & Council of European Union (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation).
Privacy and Accountability in Black-Box Medicine.
The Need for a System View to Regulate Artificial Intelligence/Machine Learning-Based Software as Medical Device.
"Ethical and Legal Challenges of Artificial Intelligence-Driven Healthcare," in Artificial Intelligence in Healthcare.
Gräsner et al. Survival after Out-of-Hospital Cardiac Arrest in Europe: Results of the EuReCa TWO Study.
On the Ethics of Algorithmic Decision-Making in Healthcare.
Hagendorff, T., and Meding, K. (2020). Ethical Considerations and Statistical Analysis of Industry Involvement in Machine Learning Research. arXiv:2006.04541 [cs]. Available at: http://arxiv.org/abs/2006.04541
The Frequency and Consequences of Cardiopulmonary Resuscitation Performed by Bystanders on Patients Who Are Not in Cardiac Arrest.
Early Cardiopulmonary Resuscitation in Out-of-Hospital Cardiac Arrest.
Havtorn, J. D., Latko, J., Edin, J., Borgholt, L., Maaløe, L., Belgrano, L., et al. (2020). MultiQT: Multimodal Learning for Real-Time Question Tracking in Speech. arXiv:2005.00812 [cs, eess]. Available at: http://arxiv.org/abs/2005.00812
Trustworthy AI and Corporate Governance: The EU's Ethics Guidelines for Trustworthy Artificial Intelligence from a Company Law Perspective.
In Pragmatist Ethics for a Technological Culture (Springer Netherlands).
Shortening Ambulance Response Time Increases Survival in Out-of-Hospital Cardiac Arrest.
Robots and Artificial Intelligence in Health Care.
Kiseleva, A. (2019). AI as a Medical Device: Is It Enough to Ensure Performance Transparency and Accountability in Healthcare? (SSRN Scholarly Paper ID 3504829). Soc. Sci. Res. Netw. Available at: https://papers.ssrn.com/abstract=3504829
Guide to Clinical Practice Guidelines: The Current State of Play.
Gender Imbalance in Medical Imaging Datasets Produces Biased Classifiers for Computer-Aided Diagnosis.
Predicting Survival from Out-of-Hospital Cardiac Arrest: A Graphic Model.
Backpropagation Applied to Handwritten Zip Code Recognition.
Ethical Framework for Designing Autonomous Intelligent Systems.
Lippert, F. (2018). Emergency Medical Services Copenhagen: Implementation of a State-of-the-Art System. Available at: https://www.forum-rettungsdienst-bayern.de/images/praesentationen_2018/Lippert_EMS_Copenhagen_November_2018_Munich.pdf
Ethical Assessments of Emerging Technologies: Appraising the Moral Plausibility of Technological Visions. Springer International Publishing.
A Unified Approach to Interpreting Model Predictions.
BIVA: A Very Deep Hierarchy of Latent Variables for Generative Modeling.
Life after Survival: Long-Term Daily Life Functioning and Quality of Life of Patients with Hypoxic Brain Injury as a Result of a Cardiac Arrest.
Recognition of Out-of-Hospital Cardiac Arrest by Medical Dispatchers in Emergency Medical Dispatch Centres in Two Countries.
European Resuscitation Council Guidelines for Resuscitation 2015: Section 1.
Complications of Bystander Cardiopulmonary Resuscitation for Unconscious Patients without Cardiopulmonary Arrest.
Morley, J., Floridi, L., Kinsey, L., and Elhalal, A. (2019). From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices. arXiv:1905.06876 [cs]. Available at: http://arxiv.org/abs/1905.06876
Cognitive Impairments in Survivors of Out-of-Hospital Cardiac Arrest: A Systematic Review.
The Influence of the Probability of Survival on Patients' Preferences Regarding Cardiopulmonary Resuscitation.
Global Resuscitation Alliance Utstein Recommendations for Developing Emergency Care Systems to Improve Cardiac Arrest Survival.
NCBI (n.d.). Agonal Respiration (Concept Id: C2315245). Available at: https://www.ncbi.nlm.nih.gov/medgen/746160 (retrieved February 4, 2021).
Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations.
Those Designing Healthcare Algorithms Must Become Actively Anti-racist.
European Resuscitation Council Guidelines for Resuscitation 2015.
How Much Can Potential Jurors Tell Us about Liability for Medical Artificial Intelligence?
Potential Liability for Physicians Using Artificial Intelligence.
The Legal and International Situation of AI, Robotics and Big Data with Attention to Healthcare.
In The Oxford Handbook of Interdisciplinarity (Oxford University Press).
Dispatcher Assessments for Agonal Breathing Improve Detection of Cardiac Arrest.
Predictors of Survival from Out-of-Hospital Cardiac Arrest: A Systematic Review and Meta-Analysis.
Error, Stress, and Teamwork in Medicine and Aviation: Cross Sectional Surveys.
Strubell, E., Ganesh, A., and McCallum, A. (2019). Energy and Policy Considerations for Deep Learning in NLP. arXiv:1906.02243 [cs]. doi:10.18653/v1/p19-1355. Available at: http://arxiv.org/abs/1906.02243
Regulating the Internet of Things: Discrimination, Privacy, and Cybersecurity in the Artificial Intelligence Age.
Recognising Out-of-Hospital Cardiac Arrest during Emergency Calls Increases Bystander Cardiopulmonary Resuscitation and Survival.
Heart Disease and Stroke Statistics 2020 Update: A Report from the American Heart Association.
Ethical and Societal Implications of Algorithms, Data, and Artificial Intelligence: A Roadmap for Research.
Do No Harm: A Roadmap for Responsible Machine Learning for Health Care.
EuReCa ONE Registry: A Prospective Observational Analysis over One Month in 27 Resuscitation Registries in Europe - the EuReCa ONE Study Protocol.
A Safety Framework for Critical Systems Utilising Deep Neural Networks. arXiv:2003.05311 [cs]. doi:10.1007/978-3-030-54549-9_16
Z-Inspection®: A Process to Assess Trustworthy AI.
Volland H and Kararigas G (2021) On Assessing Trustworthy AI in Healthcare
Received: 26 February 2021; Accepted: 19 May 2021; Published: 08 July 2021
The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Roberto V. Zicari, roberto@zicari.de
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations
Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher