Unity ML-Agents Python


This is the workflow we collectively developed at the Media Design Master (HEAD–Genève) to build an ML-Agents project from scratch using Unity 2020 LTS.

We are following these two tutorials from Unity:

  • Getting Started
  • Installation
  1. Install Unity 2020.3 LTS via Unity Hub
    • Always use Unity Hub to install your preferred version of Unity on your computer
    • Avoid downloading this Unity version directly; install it from within Unity Hub instead
  2. Install Python (3.6.1 or higher)
    • Cf. Using Virtual Environment
  3. Clone the Unity ML-Agents examples from GitHub onto your computer using this command in your terminal: `git clone --branch release_16 https://github.com/Unity-Technologies/ml-agents.git`
  4. On your computer, start the Unity Hub app
  5. From Unity Hub, use Add to add the Project sub-folder of the ml-agents clone you just downloaded
  6. From Unity Hub, open that project
  7. Once the project is loaded in Unity, open Window > Package Manager and import the ML Agents package (if it isn’t already installed)
  8. Open one of the examples, e.g. Project > Assets > ML-Agents > Examples > 3DBall

Install the Python ml-agents Tool

Follow the “Install ML Agents Python Package” instructions here: Installation

We had lots of problems on Windows. To check that the installation works (on Windows and elsewhere), run the following from the root of the ml-agents clone; it trains the 3DBall example:

mlagents-learn config/ppo/3DBall.yaml --run-id=HelloMediaDesign 

Training Our Own Project from Scratch

Make sure your Python installation of ML Agents is working (cf. above)

  • Create a new Unity Project (version 2020.3 LTS)
  • Open Window > Package Manager
  • Change the Packages dropdown to: Unity Registry
  • Find “ML Agents” and click Install
  • Create a Scene (a plane, a primitive 3D shape…)
    • and create your Agent

    When you add a ray cast sensor, set your vector observation Space Size to 0. The idea here is that sensor components add themselves automatically to the Behavior Parameters’ brain, so you do not need to declare them in your space size. If you add any observations in your own code, however, you do need to count those in the space size (see the sketch just below). Then define your number of branches (= the number of action types you’d like your agent to control) and decide whether these branches are discrete (like an int) or continuous (like a float).
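    For example, here is a minimal sketch (our own illustration, not part of the original tutorial) of adding observations from code. The class name, the target field and the two values are hypothetical; since they are added manually, they must be counted in the vector observation Space Size (here: 2).

    using UnityEngine;
    using Unity.MLAgents;
    using Unity.MLAgents.Sensors;

    public class ObservationExample : Agent   // hypothetical example class
    {
        public Transform target;   // hypothetical: something the agent should know about

        public override void CollectObservations(VectorSensor sensor)
        {
            // two hand-added float observations -> vector observation Space Size must be 2
            sensor.AddObservation(transform.localPosition.x);
            sensor.AddObservation(Vector3.Distance(transform.localPosition, target.localPosition));
        }
    }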

    If you have more than one “type” or “branch” of actions, you have to enter the number of actions for each branch. For example, if you have two branches, one for walking and the other for jumping, the first branch (“branch 0”) might have 4 discrete actions (left, right, forward, back), and the second branch (“branch 1”) might have 2 discrete actions (“jump”, “eat pasta”). A sketch of how these branches arrive in your code follows this paragraph.
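    As a rough sketch of how those two hypothetical branches reach your script (our own illustration; it uses the same legacy float[] signature as the AgentTrainer skeleton at the bottom of this page, while newer ML-Agents versions pass an ActionBuffers object instead):

    using UnityEngine;
    using Unity.MLAgents;

    public class TwoBranchAgent : Agent   // hypothetical example class
    {
        public override void OnActionReceived(float[] vectorAction)
        {
            // branch 0: 4 discrete walking actions (0 = left, 1 = right, 2 = forward, 3 = back)
            int walkAction = Mathf.FloorToInt(vectorAction[0]);

            // branch 1: 2 discrete actions (0 = "jump", 1 = "eat pasta")
            int jumpAction = Mathf.FloorToInt(vectorAction[1]);

            // react to the chosen indices here (move the transform, trigger an animation, etc.)
            Debug.Log($"walk branch chose {walkAction}, jump branch chose {jumpAction}");
        }
    }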

    • We will leave the Model field empty for now, because we haven’t trained a model yet
    • If required, add sensors to your Agent or to its children (e.g. a Ray Perception Sensor)
    • If you add a ray cast sensor, add a Tag to the objects you want it to detect
    • To add a tag to an object, open the Tag dropdown at the top of the Inspector and choose “Add Tag…”. Create your new tag, then select the object again and assign the new tag to it.

    In your Agent’s Ray Perception Sensor component, type the name of your tag into the Detectable Tags list (type the tag name; do NOT drag and drop the object itself). Then create a C# script named “AgentTrainer” (or any other name you want).

    • Assign the script to your Agent
    • Open the script
    • Add using Unity.MLAgents; to the imports at the top of the script
    • Change the base class from MonoBehaviour to Agent
    • Back in Unity, add a Decision Requester component to your Agent
    • Set its Decision Period to 1

    This value of 1 is lazy design, but we’re just starting out: it means the Agent requests a decision and recalculates everything on every single step, which often isn’t very elegant or efficient. Later, you should trigger the sensors/actions directly from within your code (a sketch follows below).
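    As a rough, hypothetical sketch of that later refinement (the class name and the interval are our own, not from the tutorial): remove the Decision Requester component and call RequestDecision() from your own code, for example every few physics steps.

    using UnityEngine;
    using Unity.MLAgents;

    public class ManualDecisionAgent : Agent   // hypothetical example class
    {
        [SerializeField] int decisionInterval = 5;   // physics steps between decisions
        int stepCounter;

        void FixedUpdate()
        {
            stepCounter++;
            if (stepCounter % decisionInterval == 0)
            {
                // ask the brain for a new decision only when we actually need one,
                // instead of recalculating everything every single step
                RequestDecision();
            }
            else
            {
                // keep applying the previously chosen action between decisions
                RequestAction();
            }
        }
    }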

    • In the AgentTrainer script, add all the code required to train your behavior and to react once it has been trained
    • See code below
    • At the root of your main Unity project’s folder (the one that contains the “Assets” folder), create a folder named TrainingConfig
    • Copy a trainer config from one of the examples (e.g. config/ppo/3DBall.yaml in the ml-agents clone) into this folder and name it config.yaml
    • Inside the file, change the behavior name so that it matches the Behavior Name set in your Agent’s Behavior Parameters component
    • To start training, open Terminal
    • Go to the folder of the trainer config (cd /pathToTrainingConfig)
    • Type ls to list the files and make sure the config.yaml file really is in the folder
    • Command to start training in the terminal: mlagents-learn config.yaml --run-id=WhateverNameForYourBrain
    • The Terminal will start logging; if the Unity logo appears, congrats! Go back to Unity and press Play
    • Wait for training, make yourself a coffee
    • When training is done
    • Create a Models folder in your assets in Unity
    • Import the resulting “NameOfTraining.onnx” file (found in the results folder, under your run-id) into it
    • Drag the .onnx file onto the Model field of the Behavior Parameters component on your Agent
    • Your agent has a brain now!
    • If you want to “see” the state of your training:
    • Open a second Terminal and cd to the folder where you ran mlagents-learn (that is where the results folder is created)
    • If your results folder is named results, type tensorboard --logdir results
    • Open the http:// link it prints in your browser
    using System.Collections;
    using System.Collections.Generic;
    using UnityEngine;
    using Unity.MLAgents;

    public class AgentTrainer : Agent
    {
        public override void Initialize()
        {
            // all the instructions for how to set up
            // the simulation when we start the training
        }

        public override void OnEpisodeBegin()
        {
            // set the starting conditions for each Training Episode:
            // where the Agent should be, its speed, its orientation, etc.
        }

        public override void Heuristic(float[] actionsOut)
        {
            // the heuristic method defines a human-defined intelligence,
            // either "hard coded" or by user interaction. We control this brain
            // by directly modifying the "actionsOut" list of float values
        }

        public override void OnActionReceived(float[] vectorAction)
        {
            // we receive from the behavior parameters a list of float values;
            // decide what to do with these values (run, jump, move, whatever)
        }

        public void OnCollisionEnter(Collision collision)
        {
            // robot did a bad thing. Give it a negative reward (i.e. punishment)
            if (collision.gameObject.tag == "BadThing")
            {
                AddReward(-1.0f);
                // reset this Learning episode and start a new one
                EndEpisode();
            }
        }
    }
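    As a usage note (our own addition, not part of the original workflow): before training, you can test your scene by filling in Heuristic() so you can drive the Agent from the keyboard; with Behavior Type set to Default and no Model assigned, Unity falls back to the heuristic. With a single discrete branch of 4 walking actions, that could look roughly like this:

    using UnityEngine;
    using Unity.MLAgents;

    public class KeyboardTestAgent : Agent   // hypothetical example class
    {
        public override void Heuristic(float[] actionsOut)
        {
            // one discrete branch with 4 actions: 0 = left, 1 = right, 2 = forward, 3 = back
            // (the mapping is hypothetical; keep it consistent with your OnActionReceived code)
            if (Input.GetKey(KeyCode.LeftArrow))       actionsOut[0] = 0f;
            else if (Input.GetKey(KeyCode.RightArrow)) actionsOut[0] = 1f;
            else if (Input.GetKey(KeyCode.UpArrow))    actionsOut[0] = 2f;
            else                                       actionsOut[0] = 3f;   // down arrow / no key -> back; a real agent probably also wants a "do nothing" action
        }
    }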


    Source: abstractmachine/unity-ml-agents-tutorial on GitHub
