Setting environment secrets

Goals

  • Make a mathematics code execution environment using OpenReward and E2B.
  • Deploy the environment to OpenReward.
  • Sample from the environment using a model of your choice.

Setup

Environments in OpenReward are written using ORS. ORS is implemented in the OpenReward Python library, and we will use it for this tutorial. You can install the library using pip or uv:
pip install openreward
You should also install the e2b Python SDK:
pip install e2b
Introduction

ORS environments can be configured to work with any sandbox provider. In this tutorial we will show how to initialise an E2B sandbox within an ORS environment. We'll set up a simple mathematics environment and give an agent access to a sandbox for executing code. By the end of this tutorial, you will understand how to use E2B as an integration for making environments with ORS and OpenReward.

Getting Started

In the Your First Environment tutorial, we built a mathematics environment using the GSM8K dataset. We will use a similar dataset here, but this time we will give an agent access to an E2B sandbox for code execution. First, let's initialise our environment gsm8ksandbox:
orwd init gsm8ksandbox --template basic
cd gsm8ksandbox && ls
Next we’ll download the two parquet files from the GSM8K HuggingFace repository and put them in the root of our project:
test-00000-of-00001.parquet
train-00000-of-00001.parquet
A single row of data from the train set looks as follows:
{'question': 'Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?', 'answer': 'Natalia sold 48/2 = <<48/2=24>>24 clips in May.\nNatalia sold 48+24 = <<48+24=72>>72 clips altogether in April and May.\n#### 72', 'id': '0'}
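Note that the gold answer ends with a `#### <number>` marker carrying the final result. As a rough illustration of that format (this helper is hypothetical and not part of the environment code; we will use Math-Verify for real answer checking below), you could pull the final answer out with a regex:

```python
import re

def extract_final_answer(answer: str) -> str:
    """Illustrative helper: return the text after the GSM8K '####' marker,
    or the whole string if no marker is present."""
    match = re.search(r"####\s*(.+)", answer)
    return match.group(1).strip() if match else answer

gold = "Natalia sold 48+24 = <<48+24=72>>72 clips altogether in April and May.\n#### 72"
print(extract_final_answer(gold))  # -> 72
```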
Next we’ll write a new server file. This will involve:
  • Loading the tasks from the parquet files
  • Verifying that the answer is correct; we'll use the Math-Verify library for this.
from math_verify import parse, verify
import pandas as pd
from pydantic import BaseModel
from typing import Optional
from e2b import AsyncSandbox

from openreward.environments import Environment, JSONObject, Server, Split, TextBlock, ToolOutput, tool

class GSM8KTaskSpec(BaseModel):
    id: str
    question: str
    answer: str


class AnswerParams(BaseModel):
    answer: str


train_tasks = pd.read_parquet("train-00000-of-00001.parquet").to_dict(orient="records")
test_tasks = pd.read_parquet("test-00000-of-00001.parquet").to_dict(orient="records")

for i, task in enumerate(train_tasks):
    task['id'] = str(i)
for i, task in enumerate(test_tasks):
    task['id'] = str(i)


class GSM8KSandbox(Environment):
    """
    A GSM8K sandbox environment
    """
    def __init__(self, task_spec: JSONObject = {}, secrets: dict[str, str] = {}):
        super().__init__(task_spec)
        self.config = GSM8KTaskSpec.model_validate(task_spec)

    @classmethod
    def list_tasks(cls, split: str) -> list[JSONObject]:
        if split == "train":
            return train_tasks
        elif split == "test":
            return test_tasks
        raise ValueError(f"Unknown split: {split}")

    @classmethod
    def list_splits(cls):
        return [Split("train", type="train"), Split("test", type="test")]

    def get_prompt(self) -> list[TextBlock]:
        return [TextBlock(type="text", text=self.config.question)]

    @tool
    def answer(self, params: AnswerParams) -> ToolOutput:
        """
        The answer tool can be used to submit your final answer. Note that this finishes the episode.
        """
        gold = parse(self.config.answer)
        answer = parse(params.answer)
        is_correct = verify(gold, answer)

        if is_correct:
            agent_message = "Correct!"
            reward = 1.0
        else:
            agent_message = "Wrong!"
            reward = 0.0

        return ToolOutput(
            blocks=[TextBlock(type="text", text=agent_message)],
            reward=reward,
            finished=True
        )

if __name__ == "__main__":
    Server([GSM8KSandbox]).run()
Install the math-verify requirement:
pip install math-verify
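Math-Verify's `verify` checks mathematical equivalence rather than raw string equality, so answers like 72, 72.0, or 144/2 can all match a gold answer of 72. A crude stdlib approximation of that idea for plain numeric answers (purely illustrative; the environment itself uses `math_verify`'s `parse` and `verify`):

```python
from fractions import Fraction

def crude_verify(gold: str, answer: str) -> bool:
    """Illustrative numeric-equivalence check: compares both strings as
    exact rational numbers, falling back to string equality on parse failure."""
    def to_fraction(s: str) -> Fraction:
        s = s.strip()
        if "/" in s:
            num, den = s.split("/")
            return Fraction(num.strip()) / Fraction(den.strip())
        return Fraction(s)
    try:
        return to_fraction(gold) == to_fraction(answer)
    except (ValueError, ZeroDivisionError):
        return gold.strip() == answer.strip()

print(crude_verify("72", "72.0"))  # -> True
```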
Now we are going to modify our environment by including an E2B sandbox. First, we’ll adjust our __init__:
def __init__(self, task_spec: JSONObject = {}, secrets: dict[str, str] = {}):
    super().__init__(task_spec)
    self.config = GSM8KTaskSpec.model_validate(task_spec)

    self.api_key = secrets.get("e2b_api_key")
    if not self.api_key:
        raise ValueError("E2B API key must be provided via secrets parameter")

    self.sandbox = None
Next, we’ll add a setup and teardown method:
async def setup(self) -> None:
    self.sandbox = await AsyncSandbox.create(
        api_key=self.api_key,
        timeout=60 * 60,  # 1 hour
    )

async def teardown(self) -> None:
    if self.sandbox is not None:
        await self.sandbox.kill()
These methods ensure that when a session begins, we create a sandbox, and when a session ends, we kill it. Lastly, we can add a tool that relies on code execution. We’ll add a bash tool. First we’ll add the Pydantic model:
class BashParams(BaseModel, extra="forbid"):
    command: str
    timeout: Optional[float] = 30.0
and then we’ll add the bash tool to the class:
@tool
async def bash(self, params: BashParams) -> ToolOutput:
    """Execute bash commands using the computer instance."""
    try:
        result = await self.sandbox.commands.run(params.command.strip(), timeout=params.timeout)
        result_dict = {
            "stdout": getattr(result, "stdout", ""),
            "stderr": getattr(result, "stderr", ""),
            "exit_code": getattr(result, "exit_code", None),
            "error": getattr(result, "error", None),
        }

        # What the model sees as tool text output
        text_out = result_dict["stdout"] or result_dict["stderr"] or str(result_dict)

        return ToolOutput(
            blocks=[TextBlock(type="text", text=text_out)],
            metadata={"output": result_dict},  # JSON-safe now
            reward=0.0,
            finished=False,
        )
    except Exception as e:
        return ToolOutput(
            metadata={"error": str(e)},
            blocks=[TextBlock(type="text", text=f"Error executing command: {str(e)}")],
            finished=False
        )
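The `getattr` defaults in the tool make the result normalisation tolerant of result objects that are missing a field. With a stand-in object (`SimpleNamespace` here is just a test double, not the real e2b result type):

```python
from types import SimpleNamespace

# A stand-in for a command result that only carries stdout and exit_code.
result = SimpleNamespace(stdout="72.0\n", exit_code=0)

result_dict = {
    "stdout": getattr(result, "stdout", ""),
    "stderr": getattr(result, "stderr", ""),      # attribute missing -> ""
    "exit_code": getattr(result, "exit_code", None),
    "error": getattr(result, "error", None),      # attribute missing -> None
}

# Same precedence as the tool: stdout, then stderr, then the whole dict.
text_out = result_dict["stdout"] or result_dict["stderr"] or str(result_dict)
print(text_out)  # -> 72.0
```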
Now we can test this environment. First run the server as before:
python server.py
Now choose a model provider and sample from the environment:
1. Set your API keys

Make sure you have API keys for OpenAI and E2B, and set the environment variables:
export OPENAI_API_KEY='your-openai-api-key-here'
export E2B_API_KEY='your-e2b-api-key-here'
2. Create your code

Save this as sample_agent.py:
from openai import OpenAI
from openreward import OpenReward
import os
import json

or_client = OpenReward()
oai_client = OpenAI()
MODEL_NAME = "gpt-5.2"

environment = or_client.environments.get(name="gsm8ksandbox", base_url="http://localhost:8080")
tasks = environment.list_tasks(split="train")
tools = environment.list_tools(format="openai")

example_task = tasks[0]

with environment.session(task=example_task, secrets={"e2b_api_key": os.getenv("E2B_API_KEY")}) as session:
    prompt = session.get_prompt()
    input_list = [{"role": "user", "content": prompt[0].text}]
    finished = False
    print(input_list)

    while not finished:
        response = oai_client.responses.create(
            model=MODEL_NAME,
            tools=tools,
            input=input_list
        )
        print(response.output)

        input_list += response.output

        for item in response.output:
            if item.type == "function_call":
                tool_result = session.call_tool(item.name, json.loads(str(item.arguments)))

                reward = tool_result.reward
                finished = tool_result.finished

                input_list.append({
                    "type": "function_call_output",
                    "call_id": item.call_id,
                    "output": json.dumps({
                        "result": tool_result.blocks[0].text
                    })
                })

                print(input_list[-1])

                if tool_result.finished:
                    finished = True
                    break
3. Run your code

python sample_agent.py
Example output:
[{'role': 'user', 'content': 'Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?\n\nUse the bash tool to execute commands.'}]
[ResponseFunctionToolCall(arguments='{"command":"python3 - << \'PY\'\\napril=48\\nmay=april/2\\nprint(april+may)\\nPY","timeout":100000}', call_id='call_TTS3bI1XFo0Gl88If18JZZ6l', name='bash', type='function_call', id='fc_04efdcdc3efb56c300699ada2d1fac81908551b7ab626fffd2', status='completed')]
{'type': 'function_call_output', 'call_id': 'call_TTS3bI1XFo0Gl88If18JZZ6l', 'output': '{"result": "72.0\\n"}'}
[ResponseOutputMessage(id='msg_04efdcdc3efb56c300699ada2f46d48190a6234d05fdc92310', content=[ResponseOutputText(annotations=[], text='Natalia sold 48 clips in April. In May she sold half of that: \\(48 \\div 2 = 24\\).\n\nAltogether, she sold \\(48 + 24 = 72\\) clips in April and May.', type='output_text', logprobs=[])], role='assistant', status='completed', type='message')]
[ResponseFunctionToolCall(arguments='{"answer":"Natalia sold 48 clips in April. In May she sold half of that: 48 ÷ 2 = 24. Altogether, she sold 48 + 24 = 72 clips."}', call_id='call_QPFUtfAviwYsozYSKadq78gZ', name='answer', type='function_call', id='fc_04efdcdc3efb56c300699ada30ddfc81909c7bd18e1398f223', status='completed')]
{'type': 'function_call_output', 'call_id': 'call_QPFUtfAviwYsozYSKadq78gZ', 'output': '{"result": "Correct!"}'}
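The loop's bookkeeping boils down to pairing each `function_call` with a `function_call_output` entry that shares its `call_id`, with the tool's text wrapped in a JSON string. In isolation (the id and text below are made up for illustration):

```python
import json

call_id = "call_example123"   # hypothetical id taken from a function_call item
tool_text = "Correct!"        # text taken from tool_result.blocks[0].text

# The entry appended to input_list for the model's next turn.
entry = {
    "type": "function_call_output",
    "call_id": call_id,
    "output": json.dumps({"result": tool_text}),
}
print(entry["output"])  # -> {"result": "Correct!"}
```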
Nice one! We have a working ORS environment. Now we’ll see how we can host the environment on OpenReward. The benefits of using OpenReward are:
  • Infrastructure: you do not have to set up infrastructure and compute to host the environment yourself. We take care of this and you are only charged based on your actual usage of the environment.
  • Discovery: your environment can be discovered and used by other users of the platform, helping drive adoption and attention to your work.

Host on OpenReward

Log into OpenReward, press the plus icon in the navbar and press New Environment. Next, fill in information about the environment and press Create Environment. You will be redirected to your new environment and will see setup instructions.

Upload environment files

We will need a way to use the train and test parquet files in our environment, so we'll upload them as environment files. Click on the Files tab and upload each file. Files are mounted to the environment server at the /orwd_data directory, so we'll need to reference this folder in our server.py. Make the following change:
train_tasks = pd.read_parquet("/orwd_data/train-00000-of-00001.parquet").to_dict(orient="records")
test_tasks = pd.read_parquet("/orwd_data/test-00000-of-00001.parquet").to_dict(orient="records")
Note: you may want to set an environment variable instead of hardcoding the path as above, so you can continue to test locally (without the /orwd_data prefix).
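Following that note, one option is to read the directory from an environment variable and default to the project root locally (the variable name DATA_DIR below is our own choice, not an OpenReward convention):

```python
import os

# Hypothetical convention: set DATA_DIR=/orwd_data in the hosted environment,
# and leave it unset locally so the files are read from the project root.
DATA_DIR = os.environ.get("DATA_DIR", ".")

train_path = os.path.join(DATA_DIR, "train-00000-of-00001.parquet")
test_path = os.path.join(DATA_DIR, "test-00000-of-00001.parquet")

# These paths would then be passed to pd.read_parquet in server.py.
print(train_path)
```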

Write the Dockerfile and requirements

We’ll need a Dockerfile in our repository:
FROM python:3.11-slim

RUN apt update && apt upgrade -y && apt install -y \
    curl

WORKDIR /app

# Copy requirements and install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY server.py .

# Expose port
EXPOSE 8000

# Start the server
CMD ["python", "server.py"]
We’ll need to update requirements.txt:
fastapi>=0.115.12
openreward
pandas
pyarrow
uvicorn>=0.34.3
math-verify[antlr4_13_2]

Push to GitHub and connect

Next, push your environment code to a GitHub repository. Once your GitHub repository is ready, go to your OpenReward environment and connect the repository. You will be given a choice of how much compute to allocate; we'll use a low compute configuration since this is a simple environment. Press Connect GitHub and your first build will begin. To check the progress of the build, click the Deployments tab, then click on the latest build row to see its logs. The build logs show the progress of building the environment; the runtime logs show calls to the environment server and can be useful for diagnosing errors.

Sample from your environment

Now that your environment is hosted on OpenReward, we can sample from it:
1. Set your API keys

Make sure you have API keys for OpenReward, OpenAI, and E2B, and set these as environment variables:
export OPENAI_API_KEY='your-openai-api-key-here'
export OPENREWARD_API_KEY='your-openreward-api-key-here'
export E2B_API_KEY='your-e2b-api-key-here'
2. Create your code

Save this as quickstart.py:
from openai import OpenAI
from openreward import OpenReward
import json
import os

or_client = OpenReward()
oai_client = OpenAI()
MODEL_NAME = "gpt-5.2"

environment = or_client.environments.get(name="yourusername/gsm8ksandbox")
tasks = environment.list_tasks(split="train")
tools = environment.list_tools(format="openai")

example_task = tasks[0]

with environment.session(task=example_task, secrets={"e2b_api_key": os.getenv("E2B_API_KEY")}) as session:
    prompt = session.get_prompt()
    input_list = [{"role": "user", "content": prompt[0].text}]
    finished = False
    print(input_list)

    while not finished:
        response = oai_client.responses.create(
            model=MODEL_NAME,
            tools=tools,
            input=input_list
        )
        print(response.output)

        input_list += response.output

        for item in response.output:
            if item.type == "function_call":
                tool_result = session.call_tool(item.name, json.loads(str(item.arguments)))

                reward = tool_result.reward
                finished = tool_result.finished

                input_list.append({
                    "type": "function_call_output",
                    "call_id": item.call_id,
                    "output": json.dumps({
                        "result": tool_result.blocks[0].text
                    })
                })

                print(input_list[-1])

                if tool_result.finished:
                    finished = True
                    break
3. Run your code

python quickstart.py
Example output:
[{'role': 'user', 'content': 'Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?\n\nUse the bash tool to execute commands.'}]
[ResponseFunctionToolCall(arguments='{"command":"python3 - << \'PY\'\\napril=48\\nmay=april/2\\nprint(april+may)\\nPY","timeout":100000}', call_id='call_TTS3bI1XFo0Gl88If18JZZ6l', name='bash', type='function_call', id='fc_04efdcdc3efb56c300699ada2d1fac81908551b7ab626fffd2', status='completed')]
{'type': 'function_call_output', 'call_id': 'call_TTS3bI1XFo0Gl88If18JZZ6l', 'output': '{"result": "72.0\\n"}'}
[ResponseOutputMessage(id='msg_04efdcdc3efb56c300699ada2f46d48190a6234d05fdc92310', content=[ResponseOutputText(annotations=[], text='Natalia sold 48 clips in April. In May she sold half of that: \\(48 \\div 2 = 24\\).\n\nAltogether, she sold \\(48 + 24 = 72\\) clips in April and May.', type='output_text', logprobs=[])], role='assistant', status='completed', type='message')]
[ResponseFunctionToolCall(arguments='{"answer":"Natalia sold 48 clips in April. In May she sold half of that: 48 ÷ 2 = 24. Altogether, she sold 48 + 24 = 72 clips."}', call_id='call_QPFUtfAviwYsozYSKadq78gZ', name='answer', type='function_call', id='fc_04efdcdc3efb56c300699ada30ddfc81909c7bd18e1398f223', status='completed')]
{'type': 'function_call_output', 'call_id': 'call_QPFUtfAviwYsozYSKadq78gZ', 'output': '{"result": "Correct!"}'}
Importantly, if you are publishing your environment, you should update your SDK example and let users know that your environment needs an E2B_API_KEY. Press Edit Settings near the SDK example, then insert the code for the environment variables.