
Marriott Bonvoy Events Launches ‘Connect Responsibly’ Initiative Globally

Bengaluru, November 6, 2024 – Marriott International, Inc. (Marriott) today announced the launch of Connect Responsibly with Marriott Bonvoy Events (Connect Responsibly), a program designed to help meeting planners embed sustainability into their events at participating hotels in the Marriott Bonvoy portfolio. Connect Responsibly plans to offer meetings and events customers access to detailed Meeting Impact Reports to measure the environmental impact of their events and select options to purchase carbon credits.

The program is anticipated to go live at managed and franchised properties from participating brands globally by the end of October. As part of the global launch, Connect Responsibly is expected to be available at 133 hotels in India and nearly 500 properties in Asia Pacific Excluding China.

“There is nothing like connecting in person, and doing so responsibly makes it that much better. With the Connect Responsibly program, we are giving our meetings and events customers options to better understand the impacts of their meetings as we collectively strive to create a more resilient future for travel,” said Erika Alexander, Chief Global Officer, Global Operations, Marriott International.

Fueled by growing demand for meeting solutions that address sustainability and informed by research and consumer insights from its global pilot program, Marriott is focused on offering a Meeting Impact Report through the Connect Responsibly program. Available following an event, the user-friendly Meeting Impact Report is intended to capture event details, property-specific sustainability practices implemented for the event, and the event’s carbon and water footprints, calculated through established hospitality industry methodologies. Marriott expects the Meeting Impact Report to be available in 11 different languages based on location.
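To make the idea of an event footprint concrete, here is a minimal, illustrative sketch of how a per-event carbon estimate can be assembled from activity data and emission factors, in the spirit of established hospitality metrics such as HCMI. The function name, the factors, and the inputs are assumptions for illustration; they are not Marriott's actual Meeting Impact Report methodology.

```python
# Illustrative only: a simplified per-event carbon estimate built from
# meeting-space usage and guest room nights. All emission factors below
# are assumed placeholder values, not any hotel's real figures.

def event_carbon_kg(meeting_space_m2, hours, room_nights,
                    space_factor_kg_per_m2_hour=0.05,
                    room_factor_kg_per_night=20.0):
    """Return an estimated event footprint in kg CO2e."""
    meeting = meeting_space_m2 * hours * space_factor_kg_per_m2_hour
    rooms = room_nights * room_factor_kg_per_night
    return meeting + rooms

# A hypothetical one-day event: 200 m2 of meeting space for 8 hours,
# plus 50 guest room nights.
total = event_carbon_kg(meeting_space_m2=200, hours=8, room_nights=50)
```

A real methodology would add scopes such as catering, travel, and energy mix per property; the point here is only that the report aggregates measured activity data against per-unit factors.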

In collaboration with South Pole, a carbon asset developer and climate consultancy, Marriott plans to offer meetings and events customers the ability to access select carbon offset projects. Through the Meeting Impact Report, these customers will have the option to utilize the South Pole online shop to choose from a range of carbon offset projects – verified by independent third-party organizations – that can be purchased as part of their event.

“Meetings and events are important business for Marriott, and our customers are eager to participate in sustainability efforts. Connect Responsibly expands ongoing initiatives and strengthens our efforts focused on sustainability in hospitality,” said Tammy Routh, Senior Vice President, Global Sales, Marriott International. “We are excited to build on our sustainability reporting capabilities to provide our meetings and events customers with detailed Meeting Impact Reports and offer access to a select portfolio of verified carbon offset projects through our collaboration with South Pole.”

This announcement is part of Marriott’s efforts to reduce greenhouse gas emissions at properties and in the supply chain. As of April 2024, Marriott is the largest global hospitality company to receive approval from the Science Based Targets initiative for both near-term and long-term science-based emissions reduction targets (SBTs). To drive progress toward its SBTs, Marriott launched the company’s Climate Action Program (CAP), which includes property-level carbon reduction goals and actions.

How To Use AI, Responsibly

In 2021, a group of researchers set out to quantify just how hot the topic of AI ethics had become. Searching Google Scholar for references to AI and ethics, they found a remarkable uptick in the field. In the more than three decades from 1985 through 2018, 275 scholarly articles focused on the ethics of artificial intelligence. In 2019 alone, 334 such articles were published – more than in the previous 34 years combined. In 2020, an additional 342 appeared.

Research into AI ethics has exploded, and much of it has focused on guidelines for building AI models. Now, AI-based tools are widely available to the public. That’s left schools, businesses, and individuals to figure out how to use AI ethically – in a way that is safe, unbiased, and accurate.

“Much of the public is not yet sufficiently informed or prepared to use AI tools in a fully responsible manner,” said IEEE Member Sukanya Mandal. “Many people are excited to experiment with AI but lack awareness of potential pitfalls around privacy, bias, transparency and accountability.”

HALLUCINATIONS AND INACCURACIES: THE BIGGEST PITFALLS FOR AI USERS

Because of the way they are built, most generative AI models are prone to hallucinations: they simply make things up, and their fluent, authoritative-sounding output gives a false appearance of confidence. That is a risk for users, who may pass on false information. In the U.S., lawyers using generative AI learned this lesson the hard way when they used chatbots to draft legal documents, only to discover that the AI had invented nonexistent cases, which they had cited as precedent in their arguments.
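One practical safeguard the lawyers’ episode suggests is never passing AI-generated citations along unchecked. A minimal sketch of that verification step, using a hypothetical trusted index (the case names and the index below are invented for illustration, not real precedents):

```python
# Minimal sketch: split AI-generated citations into those found in a
# trusted index and those that cannot be verified. The index and the
# case names are hypothetical examples.

KNOWN_CASES = {
    "Smith v. Jones (2001)",
    "Doe v. Acme Corp. (2015)",
}

def verify_citations(citations):
    """Return (verified, unverifiable) lists, preserving order."""
    verified = [c for c in citations if c in KNOWN_CASES]
    unverified = [c for c in citations if c not in KNOWN_CASES]
    return verified, unverified

# One real-looking citation, one the "AI" invented.
ai_output = ["Smith v. Jones (2001)", "Roe v. Widget Inc. (2019)"]
ok, suspect = verify_citations(ai_output)
```

Anything landing in the unverifiable list needs a human to confirm it against an authoritative source before it is used.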

“AI may not always be accurate, so its information needs to be checked,” said IEEE President Tom Coughlin.

CAN WE TRUST THE DECISIONS AI MAKES?

Artificial intelligence models are trained on massive amounts of data, and sometimes they make decisions based on extremely complex mathematical functions that are difficult for humans to understand. Users often don’t know why an AI has made a decision.

“Many AI algorithms are ‘black boxes’ whose decision-making is opaque,” Mandal said. “But particularly for high-stakes domains like healthcare, legal decisions, finance and hiring, unexplainable AI decisions are unacceptable and erode accountability. If an AI denies someone a loan or a job, there must be an understandable reason.”
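One way to make the “understandable reason” Mandal calls for concrete is to use an inherently interpretable model whose decision decomposes into per-feature contributions, often called reason codes. The sketch below assumes a hand-weighted linear score; the feature names, weights, and threshold are illustrative, not any lender’s real model.

```python
# Minimal sketch of a transparent, linear loan-scoring model that can
# explain each decision with per-feature contributions ("reason codes").
# Weights, threshold, and applicant values are assumed for illustration.

WEIGHTS = {"income": 0.5, "debt_ratio": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0

def score_with_reasons(applicant):
    """Return (decision, total score, contributions sorted worst-first)."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = sum(contributions.values())
    decision = "approve" if total >= THRESHOLD else "deny"
    # Most negative contributions first, so a denial letter can cite them.
    reasons = sorted(contributions.items(), key=lambda kv: kv[1])
    return decision, total, reasons

applicant = {"income": 3.0, "debt_ratio": 1.5, "years_employed": 2.0}
decision, total, reasons = score_with_reasons(applicant)
```

Here a denied applicant can be told exactly which factor weighed most against them – something a black-box model cannot offer without additional explainability tooling.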

WHAT HAPPENS IF WE TRUST AI TOO MUCH?

Because AI models are trained on such large datasets and respond with such fluency, they can lull users into a false sense of confidence, causing them to accept decisions without question.

In “The Impact of Technology in 2024 and Beyond: an IEEE Global Study,” a recent survey of global technology leaders, 59% of respondents identified “inaccuracies and an overreliance on AI” as one of their organization’s biggest concerns when it came to the use of generative AI.

WHY IS IT IMPORTANT TO KNOW WHAT DATA WAS USED TO TRAIN AN AI MODEL?

Imagine this: an AI model is trained to screen applicants for a job, forwarding resumes to hiring managers based on data collected over prior years to identify the people most likely to be hired. But the industry has traditionally been male dominated, so the AI could learn to recognize women’s names and automatically exclude those applicants – based not on their ability to do the job, but on their gender.

Such algorithmic biases can and do exist in AI training data, making it especially important for users to understand how models were trained.
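The hiring example above can be reduced to a toy calculation showing how a naive model inherits bias directly from its training data. The tiny historical dataset below is fabricated for illustration: a screening model that scores applicants by the historical hire rate of people “like them” simply replays past discrimination.

```python
# Minimal sketch: a naive model that scores candidates by their group's
# historical hire rate reproduces whatever bias is in the history.
# The dataset is fabricated for illustration.

historical = [
    # (gender, hired)
    ("male", True), ("male", True), ("male", False), ("male", True),
    ("female", False), ("female", False), ("female", True), ("female", False),
]

def group_hire_rate(data):
    """Return the historical hire rate per group."""
    rates = {}
    for group in {g for g, _ in data}:
        outcomes = [hired for g, hired in data if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

rates = group_hire_rate(historical)
# A model using these rates as scores ranks every man above every woman,
# regardless of individual qualifications.
```

Nothing in the code mentions ability or skill; the disparity comes entirely from the historical outcomes the model was given.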

“Ensuring unbiased data is a shared responsibility across the AI development lifecycle and an ongoing process,” Mandal said. “It starts with those sourcing data being cognizant of the risk of bias and using diverse, representative datasets. AI developers should proactively analyze datasets for bias. AI deployers should monitor real-world performance for bias. Ongoing testing and adjustment are needed as AI encounters new data. Independent audits are also valuable. No one can abdicate bias mitigation solely to others in the chain.”
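The first link in the chain Mandal describes – sourcing diverse, representative data – can start with a simple audit of group representation before training. A minimal sketch, assuming an invented dataset and an arbitrary 30% threshold:

```python
# Minimal sketch of a pre-training dataset audit: flag any group whose
# share of the records falls below a chosen representation threshold.
# The records and the 30% threshold are illustrative assumptions.

from collections import Counter

def audit_representation(records, attribute, min_share=0.30):
    """Return {group: share} for groups below the representation threshold."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()
            if n / total < min_share}

records = [
    {"gender": "male"}, {"gender": "male"}, {"gender": "male"},
    {"gender": "male"}, {"gender": "female"},
]
underrepresented = audit_representation(records, "gender")
```

A check like this catches only representation imbalance, not label bias or proxy features, so it complements rather than replaces the downstream monitoring and independent audits Mandal describes.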

SHOULD YOU TELL PEOPLE WHEN ARTIFICIAL INTELLIGENCE IS USED?

Disclosure is emerging as a key tenet of AI use. When AI makes a decision in healthcare, for example, patients should be told. Social media sites likewise require creators to disclose when AI was used to make or alter a video.