Workspaces

Workspaces (also called IDEs) are interactive cloud development environments — JupyterLab, VSCode, or custom-image environments — running inside Kubernetes pods backed by GPU resources.

Access: All project members with sufficient project quota.


Workspace States

```mermaid
stateDiagram-v2
    [*] --> Stopped
    Stopped --> Starting : Click Start
    Starting --> Running : Pod Ready
    Starting --> Error : Pod Failed
    Running --> Stopping : Click Stop
    Stopping --> Stopped : Pod Terminated
    Running --> [*] : Click Delete
    Error --> Stopped : Reset / Retry
    Stopped --> [*] : Click Delete
```

Workspace List Page

Figure 1: Workspace list showing running and stopped workspaces with status badges.

The list page shows all your workspaces with:

  • Status badge — Running (green), Starting (yellow), Stopped (grey), Error (red)
  • GPU count — effective GPU units allocated through DRA
  • Project — which project the workspace belongs to
  • Open button — appears only when status is Running
  • Start / Stop / Delete action buttons

Launch a Workspace

```mermaid
sequenceDiagram
    participant User
    participant Dashboard
    participant API
    participant K8s

    User->>Dashboard: Fill launch form (image, GPU, storage)
    Dashboard->>API: POST /api/v1/ide
    API->>K8s: Create Pod + Service
    K8s-->>API: Pod scheduled (Starting)
    Note over K8s: Pod pulls image, starts runtime
    K8s-->>API: Pod Running
    API-->>Dashboard: Workspace URL
    Dashboard-->>User: "Open" button appears
```

Step-by-Step

  1. Click Workspaces in the left sidebar.
  2. Click + New Workspace (top-right).

Figure 2: New workspace creation form with project, image, GPU, and storage fields.

  3. Fill in the form:

     | Field | Description |
     | ----- | ----------- |
     | Project | Select which project this workspace belongs to (determines quota). The form defaults to your personal project when one is available. |
     | Name | Unique name for this workspace |
     | Image | Container image — choose from the project allowlist. The Root shell toggle is available for personal projects when the project allows root containers; group-owned projects require both the project and owning group to allow root containers. |
     | Scheduling Queue | Platform queue used for priority and preemption policy. Bound queues come from the project plan; the default queue is always available. |
     | GPU Count | Number of physical GPUs requested through DRA; fractional effective usage comes from SM percentage. |
     | GPU Model | One of the GPU models allowed by your plan (e.g., RTX 5090, RTX 6000 Pro). Models outside the plan allowlist are hidden. |
     | SM Percentage | Compute share for DRA GPU allocation — 100% is a full GPU; smaller values count as fractional effective GPU usage. |
     | CPU / Memory | Optional override; defaults to plan limits |
     | Storage | Select one or more PVCs to mount into the workspace |
     | Mount Path | Where each PVC is mounted inside the container (e.g., /data) |

  4. Click Start.
  5. The workspace appears in Starting state. Wait 30–120 seconds for the pod to pull the image and start.
  6. When status changes to Running, click Open to open the IDE in a new browser tab.
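The launch flow in the sequence diagram above can be sketched as a small request builder. The `POST /api/v1/ide` endpoint comes from the diagram; the payload field names below are illustrative assumptions, not the platform's actual schema.

```python
import json

def build_workspace_request(name, project, image, gpu_count=1,
                            sm_percent=100, queue="default", mounts=None):
    """Assemble a JSON body for a hypothetical POST /api/v1/ide call.

    Field names are assumptions for illustration; consult the platform's
    API reference for the real schema.
    """
    return {
        "name": name,
        "project": project,
        "image": image,
        "queue": queue,
        "gpu": {"count": gpu_count, "smPercent": sm_percent},
        # Each mount pairs a PVC with a path inside the container.
        "storage": [{"pvc": pvc, "mountPath": path}
                    for pvc, path in (mounts or {}).items()],
    }

payload = build_workspace_request("exp-01", "team-ml", "jupyter/datascience",
                                  gpu_count=1, sm_percent=50,
                                  mounts={"ml-data": "/data"})
print(json.dumps(payload, indent=2))
# An actual launch would send this body with your platform credentials,
# e.g. requests.post(f"{base_url}/api/v1/ide", json=payload, headers=auth).
```

The same form fields the dashboard collects (project, image, GPU count, SM percentage, queue, storage mounts) map one-to-one onto this body.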

Idle Reaper

Workspaces that are idle for an extended period may be automatically stopped by the platform's idle reaper to free GPU resources. You will see an idle warning banner on the workspace list page before this happens.

Root shell

The Root shell toggle runs the IDE process as UID 0 inside the container. It does not make the pod privileged, and it grants no host namespaces, no unrestricted hostPath access, no extra Kubernetes RBAC permissions, and no additional Linux capabilities. If the toggle is disabled, ask an admin to enable root containers on the project and, for group-owned projects, on the owning group.


Scheduling Queue and Priority

Each workspace chooses a platform scheduling queue. Queues carry a priority value, an optional preemptible flag, and an optional deserved GPU limit set by an admin. DRA GPU workspaces are scheduled by the Kubernetes default scheduler; the queue is still used by the platform for policy, labels, and preemption decisions.

| Concept | Effect on your workspace |
| ------- | ------------------------ |
| Higher priority | Scheduled before pods in lower-priority queues when GPUs are scarce. |
| Preemptible queue | Your pod may be evicted to make room for a higher-priority workload. The platform sends a termination signal, then deletes the pod. |
| Default queue | Always available, no priority guarantees. Use it when your project has no plan or the plan window is closed. |

The launch form lists the queues bound to the project's plan, plus the default queue. If no plan is bound, only the default queue is selectable.
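The queue-selection rule above can be sketched in a few lines — the launch form offers the plan-bound queues plus the always-available default queue. The function name is illustrative, not platform code.

```python
def selectable_queues(plan_queues):
    """Return the queue names offered in the launch form.

    Sketch of the rule described above: plan-bound queues plus the
    always-available default queue; only the default when no plan is bound.
    """
    queues = list(plan_queues or [])   # empty when no plan is bound
    if "default" not in queues:
        queues.append("default")       # the default queue is always offered
    return queues

print(selectable_queues(["high-prio", "batch"]))
print(selectable_queues(None))
```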


Plan Window

If a project plan defines a weekly or one-off schedule window, workspaces in plan-bound queues only run while the window is open.

  • The list page shows a live countdown — Plan window closes in HH:MM:SS — when the window is active.
  • When the window closes, pods running in plan-bound queues may be evicted by the platform's plan-window reaper. The default queue is unaffected.
  • A banner offers a one-click Switch to Default Queue action when the window has expired and you want to keep working at lower priority.
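The live countdown shown on the list page renders the remaining window time as HH:MM:SS. A minimal sketch of that formatting, assuming the platform counts down whole seconds:

```python
def format_countdown(seconds_left):
    """Render remaining plan-window seconds as HH:MM:SS."""
    seconds_left = max(0, int(seconds_left))   # clamp once the window closes
    hours, rem = divmod(seconds_left, 3600)
    minutes, seconds = divmod(rem, 60)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}"

print(format_countdown(5025))   # 01:23:45
```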

Connect to Your Workspace

Once a workspace is Running, click Open. The platform proxies your browser session to the pod — no port-forwarding needed.

  • JupyterLab: Opens at /lab — standard Jupyter interface
  • VSCode (code-server): Opens at / — full VS Code in the browser
  • Custom images: Opens at the configured proxy path

Persistent work

Code and notebooks saved inside a mounted PVC path (e.g., /data) persist when the workspace stops. Files written outside a PVC path are lost when the pod is deleted.


Stop vs. Delete

| Action | What Happens |
| ------ | ------------ |
| Stop | Pod is terminated; PVC data is preserved; the workspace can be restarted. |
| Delete | Pod is permanently deleted; PVC data is preserved (the PVC is not deleted); the workspace cannot be restarted. |

Bulk Delete (Admin / IDE Manager)

Users with the IDE_MANAGE permission can select multiple workspaces and delete them in bulk using the Bulk Delete toolbar that appears when checkboxes are ticked.


GPU Allocation with DRA

GPU allocation uses Kubernetes Dynamic Resource Allocation (DRA). When you request a GPU model and SM percentage, the platform creates a ResourceClaim for the workspace.

  • GPU Count is the number of physical GPUs requested.
  • SM Percentage controls the effective share for quota accounting and GPU sharing.
  • Use nvidia-smi inside your workspace to check visible devices and runtime behavior.
  • A running GPU workspace is bound only when the Pod has both platform DRA labels and Kubernetes resourceClaims. Labels alone mean the workspace is visible to platform policy, but no GPU has been attached to the Pod.
  • For exclusive or model-specific access, select 100% SM and a model allowed by your plan, or ask an admin to adjust the plan.
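The quota accounting described above implies a simple formula: effective GPU units are the physical GPU count scaled by the SM percentage. This mirrors the description in this section; the exact platform formula is an assumption.

```python
def effective_gpu_units(gpu_count, sm_percent):
    """Effective GPU units counted against project quota.

    Sketch of the accounting implied above: physical GPUs scaled by the
    SM-percentage compute share (100% = one full GPU per device).
    """
    if not 0 < sm_percent <= 100:
        raise ValueError("SM percentage must be in (0, 100]")
    return gpu_count * sm_percent / 100

print(effective_gpu_units(2, 50))    # 1.0 -- two half-GPU shares
print(effective_gpu_units(1, 100))   # 1.0 -- one exclusive GPU
```

Under this accounting, two workspaces at 50% SM on different GPUs consume the same quota as one exclusive workspace at 100%.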

Common Questions

The Open button doesn't appear even though status is Running.

The proxy service may take a few extra seconds to start. Wait 10 seconds and refresh the page. If the issue persists, the pod may be in a crash loop — check with your admin.

My workspace stopped automatically.

The idle reaper stops workspaces that haven't had active kernel/process activity for the configured idle period. Restart the workspace from the list page.

I see an error status. What do I do?

Click the workspace name to see the error detail. Common causes: image pull failure (image not in allowlist), insufficient quota, unavailable DRA device class, or node scheduling failure. Submit a Request if you need help.

Can I have multiple workspaces running at once?

Yes, as long as their total GPU consumption stays within your project's quota.

My workspace was evicted with reason 'Plan window expired'.

The plan-window reaper stops pods in plan-bound queues when the schedule closes. Restart the workspace inside the next window, or switch it to the default queue using the banner on the list page.

Why are some GPU models greyed out in the launch form?

The plan attached to your project specifies an allowed GPU model list. Models outside that list are hidden. Ask a project manager to bind a plan that includes the model you need.