THU | Machine Learning with ml5.js


  • November 7, 2024
  • 9:15–12:00
  • Room 3448 (Marsio)

Machine Learning #

Machine learning is a way to make machines do certain tasks without being explicitly programmed to do so.

ml5.js #

ml5.js is a JavaScript library that works together with p5.js, letting you use and explore artificial intelligence and machine learning models in your projects.

Under the hood, ml5.js uses Google's TensorFlow.js library; ml5.js aims to make working with TensorFlow.js a little bit easier.

The new version of the ml5.js library is divided into three parts:

  • ml5 Model:
    • Ready-to-use models that you feed inputs (image, video, audio, text, etc.) and that return outputs (labels and confidence scores).
  • ml5 + Teachable Machine:
    • Lets you create models from your own input (images, sound, and poses) using Teachable Machine in a fast, easy, and accessible way, and import the trained models into ml5.
  • Train your own model!
    • Lets you build and train your own machine learning models with your own data, right in the ml5 library.
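To make "labels and confidence scores" concrete, here is a small plain-JavaScript sketch that picks the top prediction from a classifier-style result array. The result shape (an array of objects with `label` and `confidence` fields) is modeled after ml5's image classifier output, but treat it as an assumption and inspect the actual results in your console:

```javascript
// Hypothetical classifier output: an array of { label, confidence } objects,
// modeled after the shape ml5's image classifier returns.
const results = [
  { label: "cat", confidence: 0.91 },
  { label: "dog", confidence: 0.07 },
  { label: "rabbit", confidence: 0.02 },
];

// Return the prediction with the highest confidence score.
function topPrediction(results) {
  return results.reduce((best, r) => (r.confidence > best.confidence ? r : best));
}

console.log(topPrediction(results).label); // "cat"
```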

These are the models that are available in ml5.js; explore the examples to understand what is possible.

ml5 Models #

ml5 + Teachable Machine #

Train Your Own Model #


Example: BodyPose #

Step #1: Prepare your sketch files #

The first step is to include the ml5.js library in your index.html file. Add the following line inside the head section of the HTML file:

<script src="https://unpkg.com/ml5@1/dist/ml5.js"></script>
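For context, a minimal index.html for a p5.js + ml5.js sketch might look like the following. The p5.js CDN URL and the sketch.js filename are common p5 conventions, not something ml5 prescribes; adjust them to match your own setup:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- Load p5.js first, then ml5.js, then your own sketch -->
    <script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.9.0/p5.min.js"></script>
    <script src="https://unpkg.com/ml5@1/dist/ml5.js"></script>
  </head>
  <body>
    <script src="sketch.js"></script>
  </body>
</html>
```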

Here is an example sketch that we will build together:

// Open up your console - if everything loaded properly you should see
// the version number of the ml5 library printed to the console.
console.log("ml5 version:", ml5.version);

let video;
let bodyPose;
let poses = [];

// draw emojis to where the eyes and nose are
let noseEmoji = "👃🏻";
let eyeEmoji = "👁️";

function preload() {
  // Load the bodyPose model
  bodyPose = ml5.bodyPose("MoveNet", {flipped: true});
}

// Callback function for when bodyPose outputs data
function gotPoses(results) {
  // Save the output to the poses variable
  poses = results;
}

function setup() {
  // the default size for video input is 640x480
  createCanvas(640, 480);

  // Create the video and hide it
  // the video is mirrored
  video = createCapture(VIDEO, { flipped: true });
  video.hide();

  // Start detecting poses in the webcam video
  bodyPose.detectStart(video, gotPoses);
  
  // set the text size and alignment for drawing the emojis
  textAlign(CENTER, CENTER);
  textSize(50);
}

function draw() {
  // Draw the webcam video
  image(video, 0, 0, width, height);
  // Draw the tracked landmark points for every detected person
  // (the loop simply does nothing if no one is in view)
  for (let i = 0; i < poses.length; i++) {
    let pose = poses[i];
    
    // get the nose point and draw the nose emoji in that location
    let nosePoint = pose.nose;
    text(noseEmoji, nosePoint.x, nosePoint.y);
    
    // get the eye points
    let leftEyePoint = pose.left_eye;
    let rightEyePoint = pose.right_eye;
    text(eyeEmoji, leftEyePoint.x, leftEyePoint.y);
    text(eyeEmoji, rightEyePoint.x, rightEyePoint.y);
  }
}
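Keypoints can jitter or disappear when a body part is occluded. A common refinement is to draw only keypoints the model is reasonably sure about. The sketch below uses a mocked pose object whose keypoints array of { name, x, y, confidence } objects mirrors the shape ml5's bodyPose reports per person, but treat that shape as an assumption and check the actual results in your console:

```javascript
// Mocked pose object with a keypoints array of { name, x, y, confidence },
// mirroring the shape ml5's bodyPose reports for each detected person.
const pose = {
  keypoints: [
    { name: "nose", x: 320, y: 180, confidence: 0.95 },
    { name: "left_eye", x: 300, y: 170, confidence: 0.92 },
    { name: "right_eye", x: 340, y: 170, confidence: 0.12 }, // occluded
  ],
};

// Keep only the keypoints above a confidence threshold.
function confidentKeypoints(pose, threshold = 0.5) {
  return pose.keypoints.filter((kp) => kp.confidence >= threshold);
}

// In draw() you would loop over confidentKeypoints(pose)
// instead of drawing every keypoint unconditionally.
for (const kp of confidentKeypoints(pose)) {
  console.log(kp.name, kp.x, kp.y);
}
```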

More Resources #

Dan Shiffman has made a great tutorial about the BodyPose model.