“Responsible AI: From Theory to Practice” was one of the highlights of the AWS GenAI Data Day on 15th October 2024. Led by an engaging and enthusiastic Andrew Ellul, Startups Solutions Architect at Amazon Web Services, the session gave the audience a riveting inside look at how AWS plans to navigate the hurdles the industry faces alongside the immense opportunities presented by Generative AI.
AWS RESPONSIBLE AI IMPLEMENTATION
Responsible AI Definition and Practice: Andrew Ellul opened the session by highlighting the new innovations that GenAI is putting on the horizon. However, LLMs are still known to lack transparency, with 80% of companies admitting that they have few strategies in place for implementing Responsible AI.
Generative AI offers transformative potential but also presents risks, including biases and harmful outputs that directly affect data privacy and intellectual property, as highlighted by Atlassian. Hunt explained how AWS integration with Snowflake is expected to deliver positive outcomes by adopting principles such as fairness, privacy and governance, taking full advantage of Responsible AI best practice while minimising risk. The LatestSale.com team equally highlighted the benefits of putting these practices into the field through Amazon SageMaker and Bedrock.
Human vs. automatic evaluation criteria discussed:
- Human evaluation (method-driven): relevance, language style, brand voice, coherence
- Automatic evaluation (algorithm-driven): accuracy, robustness, toxicity, statistical measures
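A rough sketch of how that split can look in code is given below; the metric functions, thresholds and rubric fields are placeholders rather than AWS APIs, and in practice these checks map onto Amazon Bedrock model evaluation or Amazon SageMaker Clarify.

```python
# Illustrative only: a toy harness contrasting automatic metrics (scored in code)
# with human-review criteria (collected from reviewers). Metric logic is a placeholder.
from dataclasses import dataclass

@dataclass
class AutoScores:
    accuracy: float    # e.g. exact match against a reference answer
    robustness: float  # e.g. agreement across paraphrased prompts
    toxicity: float    # e.g. score from a toxicity classifier

def automatic_eval(output: str, reference: str) -> AutoScores:
    """Placeholder automatic metrics; swap in real classifiers and datasets."""
    accuracy = 1.0 if output.strip().lower() == reference.strip().lower() else 0.0
    robustness = 1.0   # would compare outputs across perturbed prompts
    toxicity = 0.0     # would call a toxicity model here
    return AutoScores(accuracy, robustness, toxicity)

HUMAN_CRITERIA = ["relevance", "language style", "brand voice", "coherence"]

def human_eval_template(output: str) -> dict:
    """Rubric handed to human reviewers; scores (e.g. 1-5) are filled in manually."""
    return {"output": output, **{criterion: None for criterion in HUMAN_CRITERIA}}

if __name__ == "__main__":
    print(automatic_eval("Paris", "paris"))
    print(human_eval_template("Paris is the capital of France."))
```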
A highly vocal audience debated how AI safety can minimise harm, with the following key issues raised:
- How to deal with divisive opinions
- How to deal with debates on topical themes or controversial topics
- How to deal with issues surrounding medical advice
RESPONSIBLE AI: INPUT TO OUTPUT
Andrew Ellul attested that AWS has the AI tooling in place to respond to the potential generation or ingestion of harmful content through its content filters, especially for financial, drug-related or medical content.
Once a given topic has been defined, users can apply the redaction and denied-topic features of Amazon Bedrock Guardrails, enabling story/theme upload for AI model training followed by the removal of political content from opinion pieces when it is not required.
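As a rough sketch of how such a guardrail might be configured programmatically, the snippet below uses the boto3 create_guardrail call to define a denied topic for political content alongside basic content filters; the guardrail name, topic definition and blocked-response messages are illustrative assumptions, not values from the session.

```python
import boto3

# Sketch: define a Bedrock Guardrail that denies political content and filters
# harmful categories. Names, definitions and messages below are illustrative.
bedrock = boto3.client("bedrock", region_name="us-east-1")

response = bedrock.create_guardrail(
    name="opinion-piece-guardrail",  # hypothetical name
    description="Blocks political content in opinion-piece workflows",
    topicPolicyConfig={
        "topicsConfig": [
            {
                "name": "Political content",
                "definition": "Statements advocating for or against political "
                              "parties, candidates or policies.",
                "examples": ["Which party should I vote for?"],
                "type": "DENY",
            }
        ]
    },
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "VIOLENCE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        ]
    },
    blockedInputMessaging="Sorry, this topic is outside the scope of this assistant.",
    blockedOutputsMessaging="Sorry, the response was withheld by policy.",
)

print(response["guardrailId"], response["version"])
```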
Verification parameters for foundation model testing include assessments of fairness, relevance, robustness and transparency, deploying tools such as invisible watermarking, which embeds a watermark in text without altering its semantic meaning. These guardrails are intended to improve traceability and customer trust.
The new watermark detection capability for Amazon Titan Image Generator is now generally available in Amazon Bedrock. By default, every image created by Amazon Titan Image Generator carries an invisible watermark, and the detection capability enables users to recognise images produced by the model, a foundation model that lets customers generate realistic, studio-quality images in large volumes and at a reasonable cost from natural language prompts.
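For context, generating an image with Titan Image Generator through the Bedrock runtime looks roughly like the sketch below; the invisible watermark is applied by the service itself, not by anything in this code, and the model ID and request fields follow the publicly documented format rather than anything shown at the event.

```python
import base64
import json

import boto3

# Sketch: generate an image with Amazon Titan Image Generator via Amazon Bedrock.
# The invisible watermark is embedded automatically by the service.
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {"text": "A studio-quality photo of a ceramic coffee mug"},
    "imageGenerationConfig": {
        "numberOfImages": 1,
        "height": 1024,
        "width": 1024,
        "cfgScale": 8.0,
    },
})

response = runtime.invoke_model(
    modelId="amazon.titan-image-generator-v1",  # model ID as documented by AWS
    body=body,
    accept="application/json",
    contentType="application/json",
)

payload = json.loads(response["body"].read())
with open("titan_image.png", "wb") as f:
    f.write(base64.b64decode(payload["images"][0]))
```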
RESPONSIBLE AI THEORY TO PRACTICE
According to the cybersecurity team at Payatu, safety measures such as Cortex Guard filter harmful content to minimise risks, while governance practices align with OWASP's Top 10 vulnerabilities for LLMs. If LLM outputs are not properly validated, they can lead to security lapses such as code execution that compromises systems. In certain situations, this vulnerability may result in remote code execution on backend systems, Cross-Site Scripting (XSS), Cross-Site Request Forgery (CSRF) and Server-Side Request Forgery (SSRF).
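The point about validating LLM outputs can be made concrete with a small generic sketch (not an AWS or Payatu tool): model text should be treated as untrusted input before it reaches a shell, a database or a rendered page.

```python
import html
import shlex
import subprocess

# Sketch: treat LLM output as untrusted input (OWASP "insecure output handling").

ALLOWED_COMMANDS = {"ls", "date", "whoami"}  # illustrative allow-list

def run_suggested_command(llm_output: str) -> str:
    """Only execute a model-suggested command if it is on an explicit allow-list."""
    parts = shlex.split(llm_output)
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        raise ValueError(f"Refusing to run unapproved command: {llm_output!r}")
    # No shell=True: arguments are passed as a list, avoiding shell injection.
    return subprocess.run(parts, capture_output=True, text=True, check=True).stdout

def render_as_html(llm_output: str) -> str:
    """Escape model text before putting it in a page, to avoid stored XSS."""
    return f"<p>{html.escape(llm_output)}</p>"

if __name__ == "__main__":
    print(render_as_html("<script>alert('xss')</script>"))
    print(run_suggested_command("date"))
```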
Explainability
“Can we trust sources? What about citations?” attendees asked. Ellul earmarked the citation apparatus built into Amazon Q as a solution ripe for discovery.
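For teams exploring that route, retrieving an answer together with its source attributions from Amazon Q Business looks roughly like the sketch below; the application ID is a placeholder and the exact request fields (for example, how user identity is supplied) vary by deployment, so the boto3 qbusiness documentation remains the reference.

```python
import boto3

# Sketch: ask Amazon Q Business a question and inspect the cited sources.
# APPLICATION_ID is a placeholder; auth and user fields vary by deployment.
qbusiness = boto3.client("qbusiness", region_name="us-east-1")

APPLICATION_ID = "11111111-2222-3333-4444-555555555555"  # hypothetical

response = qbusiness.chat_sync(
    applicationId=APPLICATION_ID,
    userMessage="What is our policy on storing customer PII?",
)

print(response["systemMessage"])
for source in response.get("sourceAttributions", []):
    print("-", source.get("title"), source.get("url"))
```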
Upon further investigation: Amazon Redshift, the first fully managed, petabyte-scale, enterprise-grade cloud data warehouse, was introduced by Amazon Web Services in 2013 and completely changed the data warehousing market. Amazon Redshift Re-Invented made analysing massive amounts of data with pre-existing business intelligence tools simpler and more cost effective. Compared with legacy on-premises data warehousing solutions, which were costly, inflexible and required a great deal of skill to configure and run, this cloud service represented a substantial advance.
Concluding Notes: OWASP Deployment in Action
Educating developers, designers, architects, managers and organisations about the possible security concerns associated with deploying and managing Large Language Models (LLMs) and Generative AI systems is the overall goal of the OWASP (Open Worldwide Application Security Project) initiative, led by the OWASP Foundation for application security.
The initiative offers a variety of resources that founders can cross-check when building AI models. Particularly noteworthy is the OWASP Top 10 list for LLM applications, which identifies the ten most serious flaws frequently found in LLM applications and emphasises their possible consequences, ease of exploitation and frequency in practical applications.
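As a practical cross-check aid, the ten categories from the 2023 edition of that list can be carried alongside a project as a simple review checklist; the code structure below is just one convenient way to hold them, not an OWASP artefact.

```python
# The OWASP Top 10 for LLM Applications (v1.1, 2023) as a simple review checklist.
OWASP_LLM_TOP_10 = {
    "LLM01": "Prompt Injection",
    "LLM02": "Insecure Output Handling",
    "LLM03": "Training Data Poisoning",
    "LLM04": "Model Denial of Service",
    "LLM05": "Supply Chain Vulnerabilities",
    "LLM06": "Sensitive Information Disclosure",
    "LLM07": "Insecure Plugin Design",
    "LLM08": "Excessive Agency",
    "LLM09": "Overreliance",
    "LLM10": "Model Theft",
}

def review_checklist(mitigated: set[str]) -> list[str]:
    """Return the categories a build has not yet addressed."""
    return [f"{code}: {name}" for code, name in OWASP_LLM_TOP_10.items()
            if code not in mitigated]

if __name__ == "__main__":
    print(review_checklist({"LLM01", "LLM02"}))
```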
AMAZON PUTS ITS WEIGHT BEHIND GenAI R&D
In November 2024, Amazon announced a whopping $110 million investment in university-driven generative AI research. The initiative, called Build on Trainium, will give academics compute hours to develop new machine learning (ML) frameworks, AI architectures and performance enhancements for massively distributed AWS Trainium UltraClusters.
In 2017, Amazon was the world's leading research and development (R&D) company. Its R&D investment was ten times greater than in 2011, five times greater than in 2012 and quadruple that of 2015. As reported in ScienceDirect, this sharp and rapid rise in R&D spending has raised the question of how R&D should be defined and focused in the digital economy. Amazon maintains that this definition and focus should encompass both “significant improvement” (labelled as R&D) and “routine or periodic alterations”, which are typically categorised as non-R&D.
Amazon pledges to keep working with the White House, lawmakers, tech companies and the AI community to promote the safe and responsible application of AI. Amazon Bedrock Guardrails, designed to implement safeguards for generative AI applications based on responsible AI policies, plays a critical role in this endeavour.
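To show where such a guardrail sits in practice, the sketch below attaches a previously created guardrail to a Bedrock Converse call; the guardrail ID and model ID are placeholders, and the field names follow the publicly documented Converse API.

```python
import boto3

# Sketch: apply an existing Bedrock Guardrail at inference time via the Converse API.
# GUARDRAIL_ID and the model ID are placeholders.
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

GUARDRAIL_ID = "gr-xxxxxxxx"  # hypothetical: returned by create_guardrail

response = runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{"text": "Summarise this opinion piece."}]}],
    guardrailConfig={
        "guardrailIdentifier": GUARDRAIL_ID,
        "guardrailVersion": "DRAFT",  # or a published version number
        "trace": "enabled",           # include the guardrail trace in the response
    },
)

print(response["output"]["message"]["content"][0]["text"])
```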