Hi,
I understand that you are trying to create a custom Gym-style environment with multiple agents. To achieve this, you can use the "rlMultiAgentFunctionEnv" function, which was introduced in the R2023b release. You will need the Reinforcement Learning Toolbox installed to use this function.
This function requires you to define the observation and action specifications for your agents and to provide custom MATLAB functions for the reset and step behavior.
Note that, because "rlMultiAgentFunctionEnv" was introduced in R2023b, it is not available in earlier MATLAB releases.
Here is an example of a custom multiagent reinforcement learning environment:
- Consider an environment containing two agents. The first agent receives an observation belonging to a four-dimensional continuous space and returns an action that can have two values, -1 and 1.
- The second agent receives an observation belonging to a mixed observation space with two channels. The first channel carries a two-dimensional continuous vector, and the second channel carries a value that is either 0 or 1. The action returned by the second agent is a continuous scalar.
- To define the observation and action spaces of the two agents, use cell arrays.
The following code shows how to do this:
obsInfo = { rlNumericSpec([4 1]), [rlNumericSpec([2 1]) rlFiniteSetSpec([0 1])] };
actInfo = { rlFiniteSetSpec([-1 1]), rlNumericSpec([1 1]) };
env = rlMultiAgentFunctionEnv(obsInfo, actInfo, @stepFcn, @resetFcn)

function [initialObs, info] = resetFcn()
    % Return one initial observation per agent; agent 2 has two channels
    initialObs = { rand(4,1), {rand(2,1), 1} };
    info = [];
end

function [nextObs, reward, isdone, info] = stepFcn(action, info)
    % Compute the next observations from the agents' actions
    nextObs = { rand([4 1])*norm(action{1}), {rand([2 1])*norm(action{2}), 0} };
    reward = {0, 0};    % one reward per agent
    isdone = false;     % episode termination flag
end
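Once the environment is created, you can sanity-check it by calling reset and step on it directly. Here is a minimal sketch; the specific action values passed to step are placeholders chosen to match the action specifications above (a value from {-1, 1} for the first agent and a continuous scalar for the second):

```matlab
% Reset the environment to obtain the initial observations for both agents
initialObs = reset(env);

% Step the environment, passing one action per agent in a cell array
[nextObs, reward, isdone, info] = step(env, {1, 0.5});
```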
Because the call to "rlMultiAgentFunctionEnv" is not terminated with a semicolon, running this code displays a summary of the created environment object.
Hope this helps!
Ronit Jain