
Commit 65c83a1

Merge branch 'ArmDeveloperEcosystem:main' into eksctl
2 parents 2c1b410 + bb71a7e

33 files changed

Lines changed: 62 additions & 20 deletions

File tree

content/learning-paths/embedded-and-microcontrollers/docker/_index.md

Lines changed: 2 additions & 0 deletions
@@ -1,6 +1,8 @@
 ---
 title: Prepare Docker image for Arm embedded development
 
+description: Learn how to create a Dockerfile, build a Docker image with Arm Compiler for Embedded and Fixed Virtual Platforms, and test the containerized Arm development environment.
+
 minutes_to_complete: 30
 
 who_is_this_for: This is an introductory topic for embedded software developers new to Docker.

content/learning-paths/embedded-and-microcontrollers/edge/_index.md

Lines changed: 2 additions & 0 deletions
@@ -1,6 +1,8 @@
 ---
 title: Learn how to run AI on Edge devices using Arduino Nano RP2040
 
+description: Learn how to collect and preprocess audio data using Edge Impulse, train an audio classification model, and deploy it to the Arduino Nano RP2040 to control LEDs based on voice commands.
+
 minutes_to_complete: 90
 
 who_is_this_for: This Learning Path is for beginners in Edge AI and TinyML, including developers, engineers, hobbyists, AI/ML enthusiasts, and researchers working with embedded AI and IoT.

content/learning-paths/embedded-and-microcontrollers/edge/program-and-deployment.md

Lines changed: 3 additions & 3 deletions
@@ -345,17 +345,17 @@ These messages indicate that your model is working and processing voice input as
 ## Record your voice to toggle the LED
 
 - Wait for the **Recording...** message to appear on the Serial Monitor. This means the system is ready to capture voice input.
-- Say your command (for example, "on" or "off") clearly and promptly. The system records for only about one second.
+- Say your command (for example, "on" or "off") when prompted. The system records for only about one second.
 - The model will make a prediction based on your voice input and toggle the LED accordingly.
-- You can adjust the **threshold** values in the code to control how confident the prediction must be before the LED toggles. This helps fine-tune the systems responsiveness.
+- You can adjust the **threshold** values in the code to control how confident the prediction must be before the LED toggles. This helps fine-tune the system's responsiveness.
 - If the LED turns on when you say "on" and turns off when you say "off", your system is working correctly.
 
 
 ## Serial Monitor output
 
 Your Serial Monitor should look like the image below:
 
-![example image alt-text#center](images/serial_monitor.png "Circuit Connection")
+![Arduino IDE Serial Monitor output showing inference results with Recording message, label predictions (noise, off, on), and corresponding confidence scores for each voice command#center](images/serial_monitor.png "Serial Monitor showing voice command inference results")
 
 Congratulations, you’ve successfully programmed your first TinyML microcontroller! You've also built a functional, smart system to control an LED with your voice.
 
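The threshold tuning mentioned in the changed lines above can be sketched out. This is an illustration only: the actual project is an Arduino (C++) sketch, and the function name, label set, and the 0.80 value here are assumptions, not the real code.

```python
# Illustration of confidence-threshold gating for voice-command predictions.
# CONFIDENCE_THRESHOLD is an assumed example value; raise it to reduce false
# triggers, lower it to make the system more responsive.
CONFIDENCE_THRESHOLD = 0.80

def decide(predictions):
    """Pick 'on' or 'off' from {label: confidence} scores, or None if too uncertain."""
    label, confidence = max(predictions.items(), key=lambda kv: kv[1])
    if label in ("on", "off") and confidence >= CONFIDENCE_THRESHOLD:
        return label
    return None  # treated as noise or low confidence: leave the LED unchanged

print(decide({"noise": 0.08, "off": 0.05, "on": 0.87}))  # confident "on"
print(decide({"noise": 0.40, "off": 0.35, "on": 0.25}))  # below threshold -> None
```

A higher threshold trades responsiveness for fewer accidental toggles, which is exactly the tuning knob the step above describes.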

content/learning-paths/embedded-and-microcontrollers/edge/software-edge-impulse.md

Lines changed: 2 additions & 2 deletions
@@ -48,12 +48,12 @@ In the following sections, you'll walk through each key page on the Edge Impulse
 
 
 
-![Screenshot of the Edge Impulse home page showing the main navigation and project dashboard alt-text#center](images/1.webp "Home page of Edge Impulse website")
+![Edge Impulse home page interface showing the main navigation menu at the top with Get Started, Platform, Resources, and Pricing options, and the central project dashboard area#center](images/1.webp "Home page of Edge Impulse website")
 
 
 ## Create a new project
 
-After you create your account and log in, the first step is to create a new project. Give your project a name that clearly reflects its purpose. This helps with easy identification, especially if you plan to build multiple models.
+After you create your account and log in, the first step is to create a new project. Give your project a name that reflects its purpose. This helps with easy identification, especially if you plan to build multiple models.
 
 For example, if you're building a keyword-spotting model, you might name it `Wake word detection`.
 

content/learning-paths/embedded-and-microcontrollers/edge_impulse_greengrass/noncameracustomcomponent.md

Lines changed: 2 additions & 2 deletions
@@ -32,7 +32,7 @@ Within the 1.0.0 directory in S3, upload these files from your cloned repo (loca
 models.tar.gz
 samples.tar.gz
 
-Next, we need to edit the EdgeImpulseRunnerRuntimeInstallerComponent.yaml and change the artifact location from "YOUR\_S3\_ARTIFACT\_BUCKET" to the actual name of your S3 bucket name (you'll see "YOUR\_S3\_ARTIFACT\_BUCKET" near the bottom of the yaml file). Save the file.
+Next, edit the EdgeImpulseRunnerRuntimeInstallerComponent.yaml and change the artifact location from "YOUR\_S3\_ARTIFACT\_BUCKET" to the actual name of your S3 bucket (you'll see "YOUR\_S3\_ARTIFACT\_BUCKET" near the bottom of the yaml file). Save the file.
 
 ### 3. Create the custom component
 
@@ -48,6 +48,6 @@ Finally, press "Create Component" and you should now have 2 custom components re
 
 ![CreateComponent](./images/GG_Create_NC_Component_3.png)
 
-Awesome! Now that the non-camera support component is created, we can go back and continue with the deployment of these components to your edge device via the AWS IoT Greengrass deployment mechanism. Press "Return to Deployment Steps" below and continue!
+Now that the non-camera support component is created, return to the deployment steps and continue with deploying these components to your edge device via the AWS IoT Greengrass deployment mechanism. Select "Return to Deployment Steps" below to continue.
 
 ### [Return to Deployment Steps](/learning-paths/embedded-and-microcontrollers/edge_impulse_greengrass/customcomponentdeployment/)
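The placeholder substitution described in the first hunk above can also be scripted. A minimal sketch: the recipe filename and `YOUR_S3_ARTIFACT_BUCKET` placeholder come from the step itself, while the bucket name and the YAML snippet in the demo are illustrative examples, not the real recipe contents.

```python
# Replace the S3 bucket placeholder in a Greengrass component recipe file.
from pathlib import Path

PLACEHOLDER = "YOUR_S3_ARTIFACT_BUCKET"

def set_artifact_bucket(recipe_path: str, bucket: str) -> None:
    """Rewrite the recipe so artifact URIs point at the given S3 bucket."""
    recipe = Path(recipe_path)
    recipe.write_text(recipe.read_text().replace(PLACEHOLDER, bucket))

# Demo against a stand-in recipe file (the YAML shape is illustrative):
demo = Path("EdgeImpulseRunnerRuntimeInstallerComponent.yaml")
demo.write_text(
    "Artifacts:\n"
    "  - URI: s3://YOUR_S3_ARTIFACT_BUCKET/artifacts/1.0.0/models.tar.gz\n"
)
set_artifact_bucket(str(demo), "my-artifact-bucket")
print(demo.read_text())
```

Scripting the edit avoids typos in the bucket name, which would otherwise surface later as artifact-download failures during deployment.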

content/learning-paths/embedded-and-microcontrollers/intro/_index.md

Lines changed: 2 additions & 0 deletions
@@ -1,6 +1,8 @@
 ---
 title: Get started with Microcontrollers
 
+description: Learn where Arm architecture is used in microcontrollers and discover microcontroller hardware options for software development on Arm Cortex-M processors.
+
 minutes_to_complete: 10
 
 who_is_this_for: This is an introductory topic for software developers working on microcontroller applications and new to the Arm architecture.

content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/_index.md

Lines changed: 2 additions & 0 deletions
@@ -1,6 +1,8 @@
 ---
 title: Introduction to TinyML on Arm using PyTorch and ExecuTorch
 
+description: Learn what differentiates TinyML from other AI domains, explore Arm-based edge devices for TinyML, and set up a development environment using ExecuTorch and Corstone-320 Fixed Virtual Platform.
+
 minutes_to_complete: 40
 
 who_is_this_for: This is an introductory topic for developers and data scientists new to Tiny Machine Learning (TinyML) who want to explore its potential using PyTorch and ExecuTorch.

content/learning-paths/embedded-and-microcontrollers/iot-sdk/_index.md

Lines changed: 2 additions & 0 deletions
@@ -1,6 +1,8 @@
 ---
 title: Build and run Arm Total Solutions for IoT
 
+description: Learn how to build examples from the Open-IoT-SDK and run them on Corstone-300 virtual hardware to understand complete IoT software stack construction.
+
 minutes_to_complete: 30
 
 who_is_this_for: This is an introductory topic for embedded software developers interested in learning how a complete IoT software stack is constructed.

content/learning-paths/embedded-and-microcontrollers/jetson_object_detection/_index.md

Lines changed: 2 additions & 0 deletions
@@ -1,6 +1,8 @@
 ---
 title: Get started with object detection using a Jetson Orin Nano
 
+description: Learn how to set up a Jetson Orin Nano with a MIPI CSI-2 camera and perform real-time object detection from live video and image files using DetectNet and TensorRT.
+
 minutes_to_complete: 60
 
 who_is_this_for: This is an introductory topic for developers interested in integrating object detection into their applications.

content/learning-paths/embedded-and-microcontrollers/linux-nxp-board/_index.md

Lines changed: 2 additions & 0 deletions
@@ -1,5 +1,7 @@
 ---
 title: Use Linux on the NXP FRDM i.MX 93 board
+
+description: Learn how to boot and configure the NXP FRDM i.MX 93 Arm board with Linux, create a user with sudo access, connect to WiFi using ConnMan, and transfer files over the network.
 
 minutes_to_complete: 120
 
