YEAR 12 DIGITAL TECHNOLOGY

Key Issues - 2024

This year's exam

The hint given for this year's exam about what the key issues will be is:
"recent developments in large language models such as ChatGPT and Google Bard / Gemini"

With this in mind, we predict that some of the questions might be:
What is a large language model?

What are some recent developments for LLMs?

How can businesses take advantage of LLMs?

What difficulties do LLMs encounter?

Answers should refer to Google Bard/Gemini and ChatGPT.


What is a large language model?

A large language model (LLM) is an AI model trained on very large amounts of text so that it can predict and generate human-like language. ChatGPT and Google Bard/Gemini, the examples named in the exam hint, are both built on large language models.

What is ChatGPT?

ChatGPT is an AI-based conversational agent developed by OpenAI. It belongs to the GPT (Generative Pre-trained Transformer) family of models, which are designed to understand and generate human-like text based on the input provided to them. ChatGPT, specifically, is trained on a vast dataset containing a wide range of internet text, including conversations, articles, forum posts, and more. It leverages this training to engage in text-based conversations with users, providing responses that are contextually relevant and coherent. It can be used for a variety of tasks, including answering questions, providing recommendations, generating text, and engaging in dialogue on various topics.

- From ChatGPT
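
ChatGPT's underlying models can also be called from your own programs through OpenAI's API. Below is a minimal sketch, assuming you have an OpenAI account, the official openai Python package installed, and an API key in the OPENAI_API_KEY environment variable; the model name is only an example and may differ from what your account offers.

# Minimal sketch: asking an OpenAI chat model a question from Python.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable;
# "gpt-4o-mini" is only an example model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "In two sentences, what is a large language model?"}
    ],
)

print(response.choices[0].message.content)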


What is Google Bard/Gemini?

  • A family of large language models (LLMs): This is the core technology behind Gemini. These are advanced AI models trained on massive amounts of data to be informative and comprehensive. There are three main versions:
    • Gemini Nano: A smaller, less powerful version used in mobile apps like summarizing recordings on Pixel phones.
    • Gemini Pro: A mid-range version with more capabilities, available through some Google products.
    • Gemini Ultra: The most powerful version, capable of complex tasks and multimodal understanding (text, code, images, etc.) This version is currently available through a paid subscription plan.
  • The new name for Google's AI chatbot: previously called Bard, the chatbot was updated and renamed to Gemini in February 2024 to reflect the new generation of AI technology.

- From Google Gemini
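
Gemini models can be used from code in a similar way. Below is a minimal sketch, assuming the google-generativeai Python package and an API key from Google AI Studio; the environment variable name and model name are only examples.

# Minimal sketch: asking a Gemini model a question from Python.
# Assumes `pip install google-generativeai` and an API key from Google AI Studio.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])  # example env variable name

model = genai.GenerativeModel("gemini-pro")  # example model name
response = model.generate_content("In two sentences, what is a large language model?")

print(response.text)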

Difficulties with Language models

Top Controversies with ChatGPT:
https://readwrite.com/chatgpt-controversies-are-our-apprehensions-right-or-wrong/ 

https://www.forbes.com/sites/glenngow/2024/01/28/revealing-the-dark-side-the-top-6-problems-with-chatgpt-and-generative-ai-in-2024/?sh=4ee211f6349a

https://www.thefountaininstitute.com/blog/chat-gpt-ethics

Beyond Bias: Navigating the Evolution and Ethics of Google's Gemini genAI

"LLMs are rapidly advancing, but there are some key issues that researchers and developers are grappling with:
  • Bias and Fairness: LLMs are trained on massive datasets of text and code, which can reflect the biases present in that data. This can lead to outputs that are discriminatory or offensive.
  • Safety and Security: LLMs can be misused to generate harmful content, like disinformation or spam. There's also a risk of malicious actors using LLMs to create deepfakes or other manipulative media.
  • Explainability and Transparency: Understanding how LLMs arrive at their outputs can be difficult. This makes it challenging to debug errors or identify where biases might be creeping in.
  • Environmental Impact: Training large LLMs requires significant computing resources, which can have a large carbon footprint.
  • Job displacement: As LLMs become more sophisticated, there's a concern that they could automate some jobs currently done by humans.
These are complex issues, and researchers are actively working on solutions. Some potential areas of focus include:
  • Developing fairer datasets for training LLMs
  • Building in safeguards to prevent malicious use
  • Creating more interpretable models
  • Improving the efficiency of training processes
  • Exploring the potential for human-AI collaboration in the workplace
Overall, while LLMs hold immense potential, addressing these issues is crucial for ensuring their responsible development and use." - Google Gemini
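
The first issue above, bias and fairness, can be shown with a toy sketch. This is not how real LLMs are trained; it is a tiny made-up "model" that just counts completions in invented training sentences, to show how a skew in the data becomes a skew in the output.

# Toy illustration of bias: a "model" that predicts the most common word it
# saw after a prompt in its (invented, deliberately skewed) training data.
from collections import Counter

training_sentences = [
    "the engineer is he",
    "the engineer is he",
    "the engineer is he",
    "the engineer is she",   # only 1 of 4 training examples says "she"
]

prompt = "the engineer is"
completions = Counter(
    s[len(prompt):].strip() for s in training_sentences if s.startswith(prompt)
)

# The "model" outputs whichever completion was most frequent in training.
prediction, count = completions.most_common(1)[0]
print("Completions seen:", dict(completions))
print(f"Prediction for '{prompt} ...': '{prediction}'")
# Because the data is skewed 3:1, the model always predicts "he".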


Information from previous exams

In 2023 the exam offered a choice between two questions, and a sample answer to the second one is included below.
Choose ONE of the following to answer:

Organisations have a choice of developing “weak AI” or “strong AI”. Explain why an organisation may choose one over the other. What are the risks and opportunities for an organisation changing from “weak AI” to “strong AI”?

OR

The Turing test originated in 1950. How likely is it that your chosen organisation’s artificial intelligence would pass the test? Discuss how relevant the test is in evaluating the effectiveness of your chosen organisation’s artificial intelligence.


The Turing test is designed to see whether an AI can pass as a human; if the AI passes as a human, it is considered to be an advanced AI. The test, designed by Alan Turing, who is widely considered a father of artificial intelligence, is flawed in modern times. It is flawed because it only applies to chatbots: AI designed to pass as human.

This only applies to a very small number of AI systems, because AI is used to complete a wide variety of tasks; one such task is facial recognition. Facial recognition AI doesn't fit the Turing test because it isn't a chatbot: it is not designed to pass as human, but to identify people through a camera using computer vision. Computer vision is a machine looking through a camera at the real world, making it capable of identifying objects and, in this case, faces. In the Turing test, a person communicates with two participants: one human and one chatbot.

The person talking with them doesn't know which is the human and which is the AI; both the AI and the human talk with the person, trying to convince them that they are the human. The person then has to choose which is which, and if they think the AI is the human, the AI has passed the Turing test. This applies to ELLA, the chatbot AI. ELLA is designed to mimic humans in order to provide a service. It is currently unlikely to pass the Turing test because it is still in the early stages of development; while it is capable of having a conversation, it doesn't completely understand human speech. ELLA uses natural language processing to communicate with people. Natural language processing allows the AI to understand the meaning behind words by comparing the conversation to its database, which allows for more 'human-like' responses.

The Turing test is relevant in evaluating the effectiveness of ELLA because ELLA is designed to be 'lifelike', so the closer to a human it seems, the better. If ELLA passes the Turing test, it can be said to be an effective AI that is ready to be used nationwide. If people can't tell whether ELLA is an AI or a human, then ELLA could be used more widely for policing purposes, such as becoming an emergency operator, giving callers the same sense of trust they feel with a real human rather than feeling cold towards an AI.
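
The Turing test set-up described in the answer above can be sketched in code. Everything here is hypothetical: the human and chatbot replies are canned stand-ins, and the interrogator uses a deliberately naive rule, but the structure (two anonymous respondents, one guess) is the test itself.

# Minimal sketch of a Turing test round: an interrogator gets two anonymous
# answers (one human, one AI) and must guess which one is the AI.
import random

def human_reply(question: str) -> str:
    # Stand-in for a real person's typed answer.
    return "I grew up in Palmerston North and I love fish and chips."

def chatbot_reply(question: str) -> str:
    # Canned stand-in for an AI system's answer (e.g. a chatbot like ELLA).
    return "I enjoy a wide variety of foods and places."

def turing_test_round(question: str) -> bool:
    """Returns True if the AI 'passes', i.e. the interrogator guesses wrong."""
    answers = [("human", human_reply(question)), ("ai", chatbot_reply(question))]
    random.shuffle(answers)  # the interrogator doesn't know which is which

    # A deliberately naive interrogator: guesses the shorter answer is the AI.
    guess = min(answers, key=lambda a: len(a[1]))[0]
    return guess != "ai"

print("AI passed this round:", turing_test_round("Tell me about yourself."))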

A breakdown of weak vs strong AI

Suspected Issues and Questions:

To deal with this uncertainty, I have developed a series of questions that could be presented.
Reflecting on these and researching them should help when the exam comes out. I would recommend being able to answer each of these questions with at least two paragraphs.
www.codemotion.com/magazine/ai-ml/devs-meet-ethics-ai-paradoxes-involving-autonomous-cars/
a.) A car's brakes fail. It has the choice of crashing and killing those on board, or swerving into innocent bystanders and saving those on board. How do we program A.I. to make the correct decision in these circumstances? Who is to blame legally if the A.I. gets it wrong?
https://hai.stanford.edu/news/designing-ethical-self-driving-cars
https://www.theatlantic.com/technology/archive/2013/10/the-ethics-of-autonomous-cars/280360/
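
For question a.) above, a toy sketch shows why this is hard to program: any decision rule has to turn lives into numbers, and the choice of those numbers is itself the ethical problem. The weights below are completely arbitrary and for illustration only.

# Toy sketch of a naive "utilitarian" decision rule for the brake-failure case.
def choose_action(passengers: int, bystanders: int) -> str:
    """Pick whichever option harms fewer people (a deliberately crude rule)."""
    options = {
        "stay_on_course": passengers,   # crash: passengers are harmed
        "swerve": bystanders,           # swerve: bystanders are harmed
    }
    return min(options, key=options.get)

# Two passengers on board, one bystander on the footpath:
print(choose_action(passengers=2, bystanders=1))  # -> "swerve"
# The code "works", but who decided that one bystander counts less than
# two passengers, and who is legally responsible when it chooses wrongly?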

b.) What are some of the wider impacts of driverless cars or drones? What are some of the benefits or disadvantages?

c.) What are some of the threats to humanity that could come from A.I.? Think about joblessness, drone attacks and terrorism.

d.) How could machine learning or learning from humans be negative?

e.) Could there be safety issues with A.I.? How could A.I. make catastrophic mistakes that would hurt humans?

www.breitbart.com/europe/2022/08/16/one-dead-nine-injured-after-self-driving-car-veers-into-traffic