openg2g.datacenter¶
openg2g.datacenter.base
¶
Abstract base class for datacenter backends and base state types.
DatacenterState
dataclass
¶
State emitted by a datacenter backend each timestep.
Contains only universally applicable fields. LLM-inference-specific
fields (batch sizes, replicas, latency) live on child classes like
LLMDatacenterState.
Attributes:

| Name | Type | Description |
|---|---|---|
| `time_s` | `float` | Simulation time in seconds. |
| `power_w` | `ThreePhase` | Three-phase power in watts. |
Source code in openg2g/datacenter/base.py
LLMDatacenterState
dataclass
¶
Bases: DatacenterState
State from a datacenter serving LLM workloads.
Extends DatacenterState with per-model batch
size, replica count, and observed inter-token latency fields used
by LLM controllers.
Attributes:

| Name | Type | Description |
|---|---|---|
| `batch_size_by_model` | `dict[str, int]` | Current batch size per model label. |
| `active_replicas_by_model` | `dict[str, int]` | Number of active replicas per model. |
| `observed_itl_s_by_model` | `dict[str, float]` | Observed average inter-token latency (seconds) per model. |
Source code in openg2g/datacenter/base.py
DatacenterBackend
¶
Bases: Generic[DCStateT], ABC
Interface for datacenter power simulation backends.
Source code in openg2g/datacenter/base.py
dt_s
abstractmethod
property
¶
Native timestep as a Fraction (seconds).
state
property
¶
Latest emitted state.
Raises:

| Type | Description |
|---|---|
| `RuntimeError` | If accessed before the first `step`. |
history(n=None)
¶
Return emitted state history (all, or latest n).
Source code in openg2g/datacenter/base.py
do_step(clock, events)
¶
Call step, record the state, and return it.
Called by the coordinator. Subclasses should not override this.
Source code in openg2g/datacenter/base.py
step(clock, events)
abstractmethod
¶
apply_control(command, events)
abstractmethod
¶
do_reset()
¶
Clear history and call reset.
Called by the coordinator. Subclasses should not override this.
reset()
abstractmethod
¶
Reset simulation state to initial conditions.
Called by the coordinator (via do_reset) before each
start. Must clear all simulation state: counters,
RNG seeds, cached values. Configuration (dt_s, models,
templates) is not affected. History is cleared automatically
by do_reset.
Abstract so every implementation explicitly enumerates its state. A forgotten field is a bug -- not clearing it silently corrupts the second run.
Source code in openg2g/datacenter/base.py
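For illustration, a minimal sketch of a hypothetical backend's `reset` (the attribute names here are invented; only the pattern of explicitly re-creating every mutable field is the point):

```python
import numpy as np

from openg2g.datacenter.base import DatacenterBackend, DatacenterState


class MyBackend(DatacenterBackend[DatacenterState]):  # sketch; other abstract methods omitted
    def reset(self) -> None:
        # Every piece of mutable simulation state is re-created explicitly.
        self._step_count = 0                            # counters
        self._rng = np.random.default_rng(self._seed)   # RNG re-seeded
        self._last_power = None                         # cached values
        # Configuration (dt_s, models, templates) is deliberately untouched;
        # history is cleared by do_reset, not here.
```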
start()
¶
Acquire per-run resources (threads, solver circuits).
Called after reset, before the simulation loop.
Override for backends that need resource acquisition (e.g.,
OpenDSSGrid compiles its
DSS circuit here). No-op by default because most offline
components have no resources to acquire.
Source code in openg2g/datacenter/base.py
LLMBatchSizeControlledDatacenter
¶
Bases: DatacenterBackend[DCStateT]
Datacenter that serves LLM workloads and supports batch-size control.
Marker layer between DatacenterBackend and
concrete implementations. Controllers that issue
SetBatchSize commands or read
active_replicas_by_model / observed_itl_s_by_model
from state should bind their generic to this class.
Source code in openg2g/datacenter/base.py
phase_share_by_model
property
¶
Per-model phase share vectors [frac_A, frac_B, frac_C].
Returns an empty dict by default. Consumers treat missing keys
as uniform [1/3, 1/3, 1/3]. Override in subclasses that know
actual server-to-phase placement.
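A short sketch of the documented consumer convention; `dc` stands in for any backend bound to this class, and `total_model_power_w` is a placeholder value:

```python
import numpy as np

UNIFORM_SHARE = np.array([1 / 3, 1 / 3, 1 / 3])

# Fall back to a uniform split when a model has no entry.
share = dc.phase_share_by_model.get("Llama-3.1-8B", UNIFORM_SHARE)
power_per_phase_w = share * total_model_power_w  # split one model's power across phases A/B/C
```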
openg2g.datacenter.command
¶
Command types targeting datacenter backends.
DatacenterCommand
¶
Base for commands targeting the datacenter backend.
Subclass this for each concrete datacenter command kind. The coordinator routes commands to backends based on this type hierarchy.
Source code in openg2g/datacenter/command.py
SetBatchSize
dataclass
¶
Bases: DatacenterCommand
Set batch sizes for one or more models.
Attributes:

| Name | Type | Description |
|---|---|---|
| `batch_size_by_model` | `dict[str, int]` | Mapping of model label to target batch size. |
| `ramp_up_rate_by_model` | `dict[str, float]` | Per-model requests/second ramp-up rate. Models not present get immediate changes (rate 0). |
| `target_site_id` | `str \| None` | Site this command targets. The coordinator uses this to route the command to the correct datacenter. |
Source code in openg2g/datacenter/command.py
ShiftReplicas
dataclass
¶
Bases: DatacenterCommand
Shift replicas for a model at this datacenter.
Positive replica_delta adds replicas (receiving site);
negative removes them (sending site).
Attributes:

| Name | Type | Description |
|---|---|---|
| `model_label` | `str` | Which model to shift. |
| `replica_delta` | `int` | Number of replicas to add (>0) or remove (<0). |
| `target_site_id` | `str \| None` | Site this command targets. The coordinator uses this to route the command to the correct datacenter. |
Source code in openg2g/datacenter/command.py
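As an illustration, constructing the two command kinds with the fields documented above (field names come from the attribute tables; the values, and the assumption that every field is a constructor keyword, are illustrative):

```python
from openg2g.datacenter.command import SetBatchSize, ShiftReplicas

set_bs = SetBatchSize(
    batch_size_by_model={"Llama-3.1-8B": 128},
    ramp_up_rate_by_model={"Llama-3.1-8B": 2.0},  # requests/second; absent models change immediately
    target_site_id="site-a",
)
shift = ShiftReplicas(
    model_label="Llama-3.1-8B",
    replica_delta=-4,          # negative: this site gives up four replicas
    target_site_id="site-a",
)
```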
openg2g.datacenter.config
¶
Datacenter facility and workload configuration.
InferenceModelSpec
¶
Bases: BaseModel
Specification for one LLM model served in the datacenter.
This is a pure model-identity object describing what is served, not
how many or at what batch size. Deployment-specific parameters
(replica count, initial batch size) are specified via ModelDeployment.

Attributes:

| Name | Type | Description |
|---|---|---|
| `model_label` | `str` | Human-readable model identifier (e.g. …). |
| `model_id` | `str` | HuggingFace model ID (e.g. …). |
| `gpus_per_replica` | `int` | GPUs allocated to each replica (determines model parallelism and per-replica power draw). |
| `itl_deadline_s` | `float` | Per-model inter-token latency deadline for the OFO latency dual (seconds). |
| `feasible_batch_sizes` | `tuple[int, ...]` | Allowed batch sizes. Used by the OFO controller for discretizing continuous batch-size updates and by the online datacenter for load-generator sizing. |
Source code in openg2g/datacenter/config.py
ModelDeployment
dataclass
¶
One model's deployment at a datacenter site.
Pairs an InferenceModelSpec (model identity) with
deployment-specific parameters.

Attributes:

| Name | Type | Description |
|---|---|---|
| `spec` | `InferenceModelSpec` | The model specification. |
| `num_replicas` | `int` | Number of replicas deployed at this site. |
| `initial_batch_size` | `int` | Starting batch size for this deployment. Must be in the model's `feasible_batch_sizes`. |
Source code in openg2g/datacenter/config.py
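An illustrative pairing of a spec with a deployment (field names follow the tables above; the concrete values are invented):

```python
from openg2g.datacenter.config import InferenceModelSpec, ModelDeployment

spec = InferenceModelSpec(
    model_label="Llama-3.1-8B",
    model_id="meta-llama/Llama-3.1-8B-Instruct",
    gpus_per_replica=1,
    itl_deadline_s=0.05,
    feasible_batch_sizes=(16, 32, 64, 128),
)
deployment = ModelDeployment(spec=spec, num_replicas=48, initial_batch_size=64)
```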
TrainingRun
¶
Training workload parameters.
The trace is eagerly rescaled so its peak matches target_peak_W_per_gpu.
Use eval_power to evaluate total training power at a given simulation time.
Combine with at and | to build a TrainingSchedule:
```python
schedule = (
    TrainingRun(n_gpus=2400, trace=trace_a).at(t_start=1000, t_end=2000)
    | TrainingRun(n_gpus=1200, trace=trace_b).at(t_start=2500, t_end=3500)
)
```
Attributes:

| Name | Type | Description |
|---|---|---|
| `n_gpus` | | Number of GPUs running the training workload. |
| `trace` | | Single-GPU `TrainingTrace`. |
| `target_peak_W_per_gpu` | | The trace is rescaled so its peak equals this value. |
Source code in openg2g/datacenter/config.py
eval_power(t, t_start, t_end)
¶
Evaluate total training power at simulation time t.
Returns zero if t is outside [t_start, t_end].
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `t` | `float` | Global simulation time (seconds). | required |
| `t_start` | `float` | Time when training becomes active (seconds). | required |
| `t_end` | `float` | Time when training stops (seconds). | required |

Returns:

| Type | Description |
|---|---|
| `float` | Total training power (W) across all `n_gpus` GPUs. |
Source code in openg2g/datacenter/config.py
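A small sketch of the windowing behaviour (the trace comes from `TrainingTrace.generate()`, documented in `openg2g.datacenter.workloads.training`; the numbers are arbitrary):

```python
from openg2g.datacenter.config import TrainingRun
from openg2g.datacenter.workloads.training import TrainingTrace

run = TrainingRun(n_gpus=2400, trace=TrainingTrace.generate())
p_on = run.eval_power(t=1500.0, t_start=1000.0, t_end=2000.0)   # rescaled trace power summed over n_gpus
p_off = run.eval_power(t=500.0, t_start=1000.0, t_end=2000.0)   # 0.0: outside [t_start, t_end]
```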
at(t_start, t_end)
¶
Schedule this training run over [t_start, t_end].
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `t_start` | `float` | Global simulation time when training becomes active (seconds). | required |
| `t_end` | `float` | Global simulation time when training stops (seconds). | required |

Returns:

| Type | Description |
|---|---|
| `TrainingSchedule` | A single-entry `TrainingSchedule`. |
Source code in openg2g/datacenter/config.py
TrainingSchedule
¶
Ordered collection of TrainingRun objects scheduled
over time windows.
Each entry is a (TrainingRun, t_start, t_end) tuple. Entries are
sorted by t_start.
Built with TrainingRun.at and |.
Example:
```python
schedule = (
    TrainingRun(n_gpus=2400, trace=trace_a).at(t_start=1000, t_end=2000)
    | TrainingRun(n_gpus=1200, trace=trace_b).at(t_start=2500, t_end=3500)
)
```
Source code in openg2g/datacenter/config.py
InferenceRamp
dataclass
¶
Inference server ramp parameters.
Transitions the active replica count for a specific model to target.
Combine with at and | to build an
InferenceRampSchedule:
```python
ramps = (
    InferenceRamp(target=144, model="Llama-3.1-8B").at(t_start=2500, t_end=3000)
    | InferenceRamp(target=864, model="Llama-3.1-8B").at(t_start=3200, t_end=3400)
)
```
Attributes:

| Name | Type | Description |
|---|---|---|
| `target` | `int` | Target number of active replicas after the ramp completes. |
| `model` | `str` | Model label this ramp applies to. |
Source code in openg2g/datacenter/config.py
at(t_start, t_end)
¶
Schedule this ramp over [t_start, t_end].
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `t_start` | `float` | Global simulation time when the ramp begins (seconds). | required |
| `t_end` | `float` | Global simulation time when the ramp ends (seconds). | required |

Returns:

| Type | Description |
|---|---|
| `InferenceRampSchedule` | A single-entry `InferenceRampSchedule`. |
Source code in openg2g/datacenter/config.py
InferenceRampSchedule
¶
Ordered collection of InferenceRamp events for
a single model.
Each entry is an (InferenceRamp, t_start, t_end) tuple. Entries are
sorted by t_start.
Built with InferenceRamp.at and |.
Semantics: before the first ramp, the active count equals
initial_count. During each [t_start, t_end] window the count
linearly interpolates from the previous level to target. Between
ramps, the count holds at the last target.
An empty schedule means initial_count replicas are active at all
times.
Example:
```python
ramps = (
    InferenceRamp(target=144, model="Llama-3.1-8B").at(t_start=2500, t_end=3000)
    | InferenceRamp(target=720, model="Llama-3.1-8B").at(t_start=3200, t_end=3400)
)
```
Source code in openg2g/datacenter/config.py
initial_count
property
¶
Replica count before any ramp event.
for_model(model_label, *, initial_count=None)
¶
Return a schedule containing only entries for model_label.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_label` | `str` | Model to filter for. | required |
| `initial_count` | `int \| None` | Override the initial replica count for this per-model schedule. If `None`, … | `None` |
Source code in openg2g/datacenter/config.py
max_count()
¶
Return the maximum target across all entries, or initial_count if empty.
count_at(t)
¶
Evaluate the active replica count at time(s) t.
Piecewise-linear interpolation between ramp events.
Before the first ramp, returns initial_count.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `t` | `float \| ndarray` | Scalar or array of global simulation times (seconds). | required |

Returns:

| Type | Description |
|---|---|
| `float \| ndarray` | Active replica count(s), same shape as `t`. |
Source code in openg2g/datacenter/config.py
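For the example schedule shown for InferenceRampSchedule above, and assuming the schedule's `initial_count` is 32, `count_at` behaves like this (values worked out by hand from the piecewise-linear rule):

```python
ramps.count_at(0.0)      # 32.0  -> before the first ramp: initial_count
ramps.count_at(2750.0)   # 88.0  -> halfway through the 2500-3000 ramp from 32 to 144
ramps.count_at(3100.0)   # 144.0 -> between ramps: holds the last target
```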
DatacenterConfig
¶
Bases: BaseModel
Physical datacenter facility configuration.
Attributes:

| Name | Type | Description |
|---|---|---|
| `gpus_per_server` | `int` | Number of GPUs per physical server rack. |
| `base_kw_per_phase` | `float` | Constant base load per phase (kW). |
| `power_factor` | `float` | Power factor of the datacenter loads (lagging). |
Source code in openg2g/datacenter/config.py
PowerAugmentationConfig
¶
Bases: BaseModel
Power augmentation settings for virtual server scaling.
Controls per-server amplitude jitter and additive noise applied during power augmentation.
Attributes:

| Name | Type | Description |
|---|---|---|
| `amplitude_scale_range` | `tuple[float, float]` | Range of per-server amplitude jitter multipliers. |
| `noise_fraction` | `float` | Gaussian noise standard deviation as a fraction of per-server power. |
Source code in openg2g/datacenter/config.py
openg2g.datacenter.layout
¶
Server layout and activation policy primitives.
Provides the topology and activation-policy building blocks used by
datacenter backends. Power augmentation (scaling per-GPU power to
three-phase datacenter power) lives in
openg2g.datacenter.workloads.inference.
ActivationPolicy
¶
Bases: ABC
Per-model activation policy that answers "which servers are active?"
Subclass to implement custom activation logic. The datacenter creates
one policy per model and passes it to
InferencePowerAugmenter.
Source code in openg2g/datacenter/layout.py
active_mask(t)
abstractmethod
¶
Boolean mask of active servers at time t.
Returns:

| Type | Description |
|---|---|
| `ndarray` | Array of shape … |
active_indices(t)
¶
Indices of active servers at time t.
The default implementation returns indices in ascending order
via np.where(active_mask(t)). Subclasses
may override to return
indices in a specific order (e.g., priority order) to control
floating-point summation order in the datacenter.
Returns:

| Type | Description |
|---|---|
| `ndarray` | 1-D int array of active server indices. |
Source code in openg2g/datacenter/layout.py
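A hypothetical custom policy, as a sketch (the constructor and attribute names are invented; only `active_mask` is required by the ABC):

```python
import numpy as np

from openg2g.datacenter.layout import ActivationPolicy


class FirstKActivePolicy(ActivationPolicy):
    """Keep the first k servers active at all times (illustrative only)."""

    def __init__(self, num_servers: int, k: int) -> None:
        self.num_servers = num_servers
        self.k = k

    def active_mask(self, t: float) -> np.ndarray:
        mask = np.zeros(self.num_servers, dtype=bool)
        mask[: self.k] = True
        return mask
```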
RampActivationPolicy
¶
Bases: ActivationPolicy
Activate servers by fixed random priority, following an
InferenceRampSchedule.
At time t, the top-k servers (by random priority) are active, where k is derived from the schedule's absolute replica count and the model's GPU requirements.
This is the default policy used by
OfflineDatacenter.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `schedule` | `InferenceRampSchedule` | Per-model ramp schedule (absolute replica counts). | required |
| `num_servers` | `int` | Total allocated servers (may exceed baseline when ramp targets exceed the initial replica count). | required |
| `rng` | `Generator` | RNG for randomizing priority ordering. Consumed once at construction time. | required |
| `gpus_per_replica` | `int` | GPUs required per model replica. | required |
| `gpus_per_server` | `int` | GPUs per physical server. | required |
Source code in openg2g/datacenter/layout.py
active_indices(t)
¶
Return active server indices in priority order.
ServerLayout
dataclass
¶
Per-model server layout describing how GPUs are organized.
This describes the physical topology only. Activation policies (which
servers are on/off at a given time) are managed separately by the
datacenter and passed to
InferencePowerAugmenter
alongside layouts.
Attributes:

| Name | Type | Description |
|---|---|---|
| `num_servers` | `int` | Number of physical servers for this model. |
| `total_gpus` | `int` | Total GPU count across all servers. |
| `gpus_per_replica` | `int` | GPUs per model replica. |
| `gpus_per_server_list` | `ndarray` | GPU count per server (last may be partial). |
| `phase_list` | `ndarray` | Phase assignment per server (0=A, 1=B, 2=C). |
| `stagger_offsets` | `ndarray` | Per-server offsets for desynchronization. In offline mode these are integer indices into a power template; in online mode they can be float time offsets into a rolling buffer. |
| `amplitude_scales` | `ndarray` | Per-server power multiplier for inter-server variation. |
| `noise_fraction` | `float` | Gaussian noise standard deviation as a fraction of per-server power. |
Source code in openg2g/datacenter/layout.py
openg2g.datacenter.offline
¶
Offline (trace-based) datacenter backend.
OfflineDatacenterState
dataclass
¶
Bases: LLMDatacenterState
Extended state from the offline (trace-based) backend.
Adds per-model power breakdown to
LLMDatacenterState.
Source code in openg2g/datacenter/offline.py
OfflineWorkload
dataclass
¶
Complete offline simulation workload.
Bundles inference data with replica counts, optional training overlays, and inference server ramp events.
Attributes:

| Name | Type | Description |
|---|---|---|
| `inference_data` | `InferenceData` | LLM inference workload with offline simulation data (model specs, power templates, ITL fits). |
| `replica_counts` | `dict[str, int]` | Mapping of model label to initial replica count at this site. |
| `inference_ramps` | `InferenceRampSchedule` | Inference server ramp schedule. An empty schedule keeps all servers active at their initial replica counts. |
| `training` | `TrainingSchedule` | Training workload schedule. An empty schedule disables training overlay. |
Source code in openg2g/datacenter/offline.py
OfflineDatacenter
¶
Bases: LLMBatchSizeControlledDatacenter[OfflineDatacenterState]
Trace-based datacenter simulation with step-by-step interface.
Each step call computes one timestep of power output by indexing
into pre-built per-GPU templates, applying per-server amplitude
scaling and noise, and summing across active servers per phase.
Batch size changes via apply_control take effect on the next
step call.
If workload.inference_ramps is set, a
RampActivationPolicy
is created per model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `datacenter` | `DatacenterConfig` | Facility configuration (GPUs per server, base load). | required |
| `workload` | `OfflineWorkload` | Offline workload configuration bundling inference data, training overlays, and server ramp events. | required |
| `dt_s` | `Fraction` | Simulation timestep (seconds). | required |
| `seed` | `int` | Random seed for layout generation, noise, and latency sampling. Sub-seeds are derived deterministically. | `0` |
| `power_augmentation` | `PowerAugmentationConfig \| None` | Per-server amplitude scaling and noise settings. | `None` |
Source code in openg2g/datacenter/offline.py
total_gpu_capacity
property
¶
Maximum number of GPUs this datacenter can host.
phase_share_by_model
property
¶
Per-model phase share vectors derived from server placement.
Returns:

| Type | Description |
|---|---|
| `dict[str, ndarray]` | Mapping of model label to a 3-element array `[frac_A, frac_B, frac_C]`. |
current_gpu_usage()
¶
Current total GPU usage across all models (initial + offsets).
Source code in openg2g/datacenter/offline.py
available_gpu_capacity()
¶
apply_control(command, events)
¶
Apply a control command. Dispatches on command type.
Source code in openg2g/datacenter/offline.py
apply_control_set_batch_size(command, events)
¶
Record new batch sizes. Changes take effect on the next step.
Source code in openg2g/datacenter/offline.py
apply_control_shift_replicas(command, events)
¶
Shift replicas for a model by adjusting the activation policy base count.
Source code in openg2g/datacenter/offline.py
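A rough end-to-end sketch of driving the backend by hand, the way the coordinator would. This is heavy on assumptions: `inference_data`, `dc_config`, `clock`, and `events` are placeholders built elsewhere, and the `OfflineWorkload` fields not passed are assumed to default to empty schedules:

```python
from fractions import Fraction

from openg2g.datacenter.offline import OfflineDatacenter, OfflineWorkload

workload = OfflineWorkload(
    inference_data=inference_data,          # InferenceData (see workloads.inference)
    replica_counts={"Llama-3.1-8B": 48},
)
dc = OfflineDatacenter(datacenter=dc_config, workload=workload, dt_s=Fraction(1, 10), seed=0)

dc.do_reset()                      # clears history and calls reset()
dc.start()                         # no-op unless the backend acquires resources
state = dc.do_step(clock, events)  # one timestep; the coordinator normally drives this
print(state.power_w, state.batch_size_by_model)
```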
openg2g.datacenter.online
¶
Online (live GPU) datacenter backend with power augmentation.
Connects to real vLLM inference servers for load generation and ITL
measurement, and to zeusd instances for live GPU power monitoring.
Power readings from a small number of real GPUs are augmented to
datacenter scale using the shared
InferencePowerAugmenter
pipeline.
Requires pip install zeus aiohttp.
STAGGER_BUFFER_S = 10.0
module-attribute
¶
Seconds of power history for temporal staggering.
Also used as the stagger range when building
ServerLayout
(float offsets drawn from [0, STAGGER_BUFFER_S)).
Not user-configurable. Patchable for testing via
openg2g.datacenter.online.STAGGER_BUFFER_S = ....
OnlineDatacenterState
dataclass
¶
Bases: LLMDatacenterState
Extended state from the online (live GPU) backend.
The base power_w
field carries the augmented three-phase power (what the grid sees).
This subclass adds the measured (pre-augmentation) breakdown for
post-hoc analysis.
Attributes:

| Name | Type | Description |
|---|---|---|
| `measured_power_w` | `ThreePhase` | Total measured three-phase power from real GPUs (before augmentation), plus base load. |
| `measured_power_w_by_model` | `dict[str, float]` | Per-model total measured power from real GPUs (watts). |
| `augmented_power_w_by_model` | `dict[str, float]` | Per-model augmented power (watts). This is the power fed to the grid for each model after scaling up. |
| `augmentation_factor_by_model` | `dict[str, float]` | Per-model augmentation multiplier (virtual replicas / real replicas). |
| `prometheus_metrics_by_model` | `dict[str, dict[str, float]]` | Per-model Prometheus metrics snapshot. Keys are model labels, values are dicts with metric names like … |
Source code in openg2g/datacenter/online.py
GPUEndpointMapping
¶
Bases: BaseModel
Maps a zeusd endpoint to specific GPUs.
Attributes:

| Name | Type | Description |
|---|---|---|
| `host` | `str` | Hostname or IP of the zeusd instance. |
| `port` | `int` | TCP port of the zeusd instance. |
| `gpu_indices` | `tuple[int, ...]` | GPU device indices to monitor on this endpoint. |
Source code in openg2g/datacenter/online.py
endpoint_key
property
¶
Return the host:port key used by PowerStreamingClient.
VLLMDeployment
¶
Bases: BaseModel
Deployment of one LLM model on a vLLM server.
Warning
vLLM must be a patched version with the POST /set_max_num_seqs
endpoint implemented.
Pairs a reusable
InferenceModelSpec
with physical deployment details. simulated_num_replicas is the
augmented replica count for grid simulation. The real replica
count is derived from gpu_endpoints and spec.gpus_per_replica.
Tracks the current batch size (max_num_seqs) and provides
set_batch_size() to update it on the vLLM server.
Attributes:

| Name | Type | Description |
|---|---|---|
| `spec` | `InferenceModelSpec` | Model specification (shared with offline datacenter). |
| `simulated_num_replicas` | `int` | Number of replicas to simulate for grid power augmentation. Must be specified explicitly. |
| `vllm_base_url` | `str` | Base URL of the vLLM server (e.g. …). |
| `gpu_endpoints` | `tuple[GPUEndpointMapping, ...]` | GPU endpoint mappings for power monitoring. |
| `request_extra_body` | `dict[str, Any] \| None` | Extra fields merged into every request dict for this model (e.g. …). |
| `initial_batch_size` | `int` | Starting batch size. The … |
| `batch_size` | `int` | Current batch size (`max_num_seqs`). |
Source code in openg2g/datacenter/online.py
num_real_gpus
property
¶
Total number of real GPUs for this model across all endpoints.
num_real_replicas
property
¶
Number of real replicas (real GPUs / GPUs per replica).
augmentation_factor
property
¶
Ratio of simulated replicas to real replicas.
set_batch_size(batch_size, ramp_up_rate=0.0)
¶
Update batch size on the vLLM server and track it locally.
Sends POST /set_max_num_seqs to the vLLM server.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `batch_size` | `int` | New batch size (`max_num_seqs`) to set. | required |
| `ramp_up_rate` | `float` | Optional ramp-up rate for gradual increase. | `0.0` |
Source code in openg2g/datacenter/online.py
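Usage sketch (`deployment` is an existing `VLLMDeployment`; the ramp-rate unit is assumed to match `SetBatchSize.ramp_up_rate_by_model`, i.e. requests/second):

```python
deployment.set_batch_size(64)                     # immediate change on the vLLM server
deployment.set_batch_size(128, ramp_up_rate=2.0)  # gradual ramp up to 128
```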
LiveServerConfig
¶
Bases: BaseModel
Configuration for interacting with live vLLM servers.
Groups settings related to load generation, ITL measurement, and Prometheus monitoring. The online counterpart of offline's trace/template data.
Attributes:

| Name | Type | Description |
|---|---|---|
| `requests_dir` | `Path \| None` | Directory containing per-model JSONL request files (e.g. …). |
| `prometheus_poll_interval_s` | `float` | How often to poll vLLM `/metrics` for request counts and saturation monitoring. Set to 0 to disable. |
| `max_output_tokens` | `int` | Token limit for generated load requests (used by the fallback request when no JSONL requests are provided). |
| `itl_window_s` | `float` | Sliding window for ITL averaging (seconds). |
Source code in openg2g/datacenter/online.py
OnlineDatacenter
¶
Bases: LLMBatchSizeControlledDatacenter[OnlineDatacenterState]
Live GPU datacenter backend with power augmentation.
Dispatches inference load to vLLM servers, streams GPU power from
zeusd, measures ITL from streaming responses, and augments power
readings to datacenter scale using the shared
InferencePowerAugmenter
pipeline (same as
OfflineDatacenter).
Call start before the first step and
stop after the simulation loop finishes.
PowerStreamingClient is constructed internally from the GPU
endpoints declared in each deployment. Health checks are always
performed during start.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `datacenter` | `DatacenterConfig` | Facility configuration (GPUs per server, base load). | required |
| `deployments` | `Sequence[VLLMDeployment]` | Model deployments with physical hardware mapping. | required |
| `dt_s` | `Fraction` | Simulation timestep (seconds). | `Fraction(1, 10)` |
| `seed` | `int` | Random seed for layout generation and noise. | `0` |
| `power_augmentation` | `PowerAugmentationConfig \| None` | Per-server amplitude scaling and noise settings. | `None` |
| `inference_ramps` | `InferenceRampSchedule \| None` | Inference server ramp event(s). | `None` |
| `live_server` | `LiveServerConfig \| None` | Configuration for interacting with live vLLM servers. Request data is loaded from … | `None` |
Source code in openg2g/datacenter/online.py
phase_share_by_model
property
¶
Per-model phase share vectors derived from server layout.
start()
¶
Start load generation, warm up servers, and fill the power buffer.
Sequence:

- Run health checks on all vLLM servers and zeusd instances.
- Wait for at least one power reading per endpoint (10 s timeout).
- Set initial batch sizes on all vLLM servers.
- Start load generation threads.
- Warm up: poll power into the rolling buffer while waiting for each model's `num_requests_running` to reach 95% of its `initial_batch_size`. Fails after 60 s if any model does not saturate.
Source code in openg2g/datacenter/online.py
stop()
¶
step(clock, events)
¶
Read live power, augment to datacenter scale, and return state.
Source code in openg2g/datacenter/online.py
apply_control(command, events)
¶
Apply a control command. Dispatches on command type.
Source code in openg2g/datacenter/online.py
apply_control_set_batch_size(command, events)
¶
Apply batch size command by sending HTTP requests to vLLM servers.
Source code in openg2g/datacenter/online.py
openg2g.datacenter.workloads.inference
¶
Inference workload: power traces, templates, ITL fits, and augmentation.
MLEnergySource
¶
Bases: BaseModel
Per-model ML.ENERGY benchmark data extraction settings.
Attributes:

| Name | Type | Description |
|---|---|---|
| `model_label` | `str` | Simulation label for the model. |
| `task` | `str` | Benchmark task name (e.g. …). |
| `gpu` | `str` | GPU model name (e.g. …). |
| `batch_sizes` | `tuple[int, ...]` | Batch sizes to extract from the benchmark data. |
| `fit_exclude_batch_sizes` | `tuple[int, ...]` | Batch sizes to exclude from logistic curve fitting (but still included in trace extraction). |
Source code in openg2g/datacenter/workloads/inference.py
InferenceTrace
dataclass
¶
A single power trace measurement.
Attributes:

| Name | Type | Description |
|---|---|---|
| `t_s` | `ndarray` | Time vector (seconds), monotonically increasing. |
| `power_w` | `ndarray` | Total power vector (watts) across all measured GPUs, same length as `t_s`. |
| `measured_gpus` | `int` | Number of GPUs used in the measurement. |
Source code in openg2g/datacenter/workloads/inference.py
ITLFitStore
¶
Per-model, per-batch-size ITL mixture distributions.
Indexed by `(model_label, batch_size)`. Provides:

- `load`: load fits from a CSV produced by the data pipeline
- `distributions`: access as a nested dict
- `sample_avg`: sample a fleet-average ITL value

Attributes:

| Name | Type | Description |
|---|---|---|
| `COL_MODEL_LABEL` | | Column name for model label in the CSV. |
| `COL_BATCH_SIZE` | | Column name for batch size in the CSV. |
Source code in openg2g/datacenter/workloads/inference.py
distributions
property
¶
Nested dict: model_label -> batch_size -> ITLMixtureModel.
sample_avg(model_label, batch_size, n_replicas, rng)
¶
Sample a fleet-average ITL for the given model and batch size.
Uses ITLMixtureModel.sample_avg under the hood, with the
approx_sampling_thresh set at construction time.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_label` | `str` | Model label string. | required |
| `batch_size` | `int` | Current batch size. | required |
| `n_replicas` | `int` | Number of active replicas. | required |
| `rng` | `Generator` | NumPy random generator for sampling. | required |

Returns:

| Type | Description |
|---|---|
| `float` | Fleet-average ITL in seconds. |

Raises:

| Type | Description |
|---|---|
| `KeyError` | If model or batch size is not in the store. |
Source code in openg2g/datacenter/workloads/inference.py
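Usage sketch (the CSV path and model label are placeholders; `load` is documented below):

```python
import numpy as np

from openg2g.datacenter.workloads.inference import ITLFitStore

fits = ITLFitStore.load("data/latency_fits.csv")
itl_s = fits.sample_avg(
    model_label="Llama-3.1-8B",
    batch_size=64,
    n_replicas=48,
    rng=np.random.default_rng(0),
)
```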
load(csv_path, approx_sampling_thresh=30)
classmethod
¶
Load ITL mixture fits from a CSV.
Expected columns: model_label, max_num_seqs, plus the
itl_mix_* parameter columns produced by
ITLMixtureModel.to_dict().
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `csv_path` | `Path \| str` | Path to the latency fits CSV. | required |
| `approx_sampling_thresh` | `int` | Replica count above which sampling uses a CLT normal approximation instead of drawing individual samples. | `30` |
Source code in openg2g/datacenter/workloads/inference.py
save(csv_path)
¶
Save ITL mixture fits to a CSV.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `csv_path` | `Path` | Output CSV path. | required |
Source code in openg2g/datacenter/workloads/inference.py
InferenceTemplateStore
¶
Pre-built per-GPU power templates for a specific simulation config.
Created by InferenceTraceStore.build_templates.
Use template to look up a template by model label and batch size.
Source code in openg2g/datacenter/workloads/inference.py
InferenceTraceStore
¶
Manages raw power traces loaded from CSV files.
Indexed by `(model_label, batch_size)`. Provides:

- `load`: load traces discovered via a manifest CSV
- `build_templates`: build per-GPU power templates for a specific simulation config, returning an `InferenceTemplateStore`
Source code in openg2g/datacenter/workloads/inference.py
load(manifest)
classmethod
¶
Load traces discovered via a manifest CSV.
Trace file paths in the manifest are resolved relative to the manifest file's parent directory.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `manifest` | `Path` | Path to the manifest CSV (e.g. …). | required |
Source code in openg2g/datacenter/workloads/inference.py
build_templates(*, duration_s, dt_s, steady_skip_s=0.0)
¶
Build per-GPU power templates for all traces.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `duration_s` | `Fraction \| float` | Total simulation duration (seconds). | required |
| `dt_s` | `Fraction \| float` | Simulation timestep (seconds). | required |
| `steady_skip_s` | `float` | Skip this many seconds from the start of each trace to avoid warm-up transients. | `0.0` |

Returns:

| Type | Description |
|---|---|
| `InferenceTemplateStore` | A new `InferenceTemplateStore`. |
Source code in openg2g/datacenter/workloads/inference.py
save(out_dir)
¶
Save traces and manifest CSV to a directory.
Writes individual trace CSVs to out_dir/traces/ and a manifest
CSV at out_dir/traces_summary.csv.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `out_dir` | `Path` | Output directory. | required |
Source code in openg2g/datacenter/workloads/inference.py
InferenceData
¶
LLM inference workload with offline simulation data.
Bundles model specifications with power templates and latency distributions. Validates that all models have matching data entries.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `models` | `tuple[InferenceModelSpec, ...]` | Model specifications as a tuple of `InferenceModelSpec`. | required |
| `power_templates` | `InferenceTemplateStore` | Pre-built per-GPU power templates for all models and batch sizes, created via `InferenceTraceStore.build_templates`. | required |
| `itl_fits` | `ITLFitStore \| None` | Per-model ITL mixture distributions. Required when using controllers that read observed latency (e.g., …). | `None` |
Source code in openg2g/datacenter/workloads/inference.py
models
property
¶
The model specifications.
filter_models(models)
¶
Return a new InferenceData containing only the specified models.
Source code in openg2g/datacenter/workloads/inference.py
generate(models, data_sources, *, runs=None, mlenergy_data_dir=None, dt_s=0.1, seed=0, itl_sample_cap=2048)
classmethod
¶
Generate inference data from ML.ENERGY benchmark data.
Produces power traces and ITL mixture fits for all models and
batch sizes specified in data_sources.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `models` | `tuple[InferenceModelSpec, ...]` | Model specifications. | required |
| `data_sources` | `dict[str, MLEnergySource]` | Per-model benchmark data extraction settings, keyed by `model_label`. | required |
| `runs` | `Any` | Pre-loaded … | `None` |
| `mlenergy_data_dir` | `Path \| None` | Path to compiled mlenergy-data directory. Ignored if `runs` is provided. | `None` |
| `dt_s` | `float` | Trace timestep (seconds). | `0.1` |
| `seed` | `int` | Random seed for ITL fitting. | `0` |
| `itl_sample_cap` | `int` | Maximum ITL samples per run for fitting. | `2048` |

Returns:

| Type | Description |
|---|---|
| `InferenceData` | A new `InferenceData` (… templates — call … on the saved/loaded store to get templates). |
Source code in openg2g/datacenter/workloads/inference.py
save(out_dir, *, plot=False)
¶
Save traces and ITL fits to a directory.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `out_dir` | `Path` | Output directory. | required |
| `plot` | `bool` | If `True`, … | `False` |
Source code in openg2g/datacenter/workloads/inference.py
load(data_dir, models, *, duration_s=600.0, dt_s=0.1, steady_skip_s=0.0)
classmethod
¶
Load from a generated data directory.
Loads traces from traces_summary.csv, builds templates, and
loads ITL fits from latency_fits.csv.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `data_dir` | `Path` | Directory containing generated data. | required |
| `models` | `tuple[InferenceModelSpec, ...]` | Model specifications. | required |
| `duration_s` | `float` | Simulation duration for template building. | `600.0` |
| `dt_s` | `float` | Simulation timestep for template building. | `0.1` |
| `steady_skip_s` | `float` | Skip seconds for template building. | `0.0` |
Source code in openg2g/datacenter/workloads/inference.py
ensure(data_dir, models, data_sources=None, *, mlenergy_data_dir=None, plot=False, duration_s=600.0, dt_s=0.1, steady_skip_s=0.0)
classmethod
¶
Load from data_dir, generating first if needed.
If data_dir/traces_summary.csv does not exist, generates
inference data from ML.ENERGY benchmark data and saves it.
Then loads and returns.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `data_dir` | `Path` | Data directory (generated files go here). | required |
| `models` | `tuple[InferenceModelSpec, ...]` | Model specifications. | required |
| `data_sources` | `dict[str, MLEnergySource] \| None` | Per-model benchmark data extraction settings, keyed by `model_label`. | `None` |
| `mlenergy_data_dir` | `Path \| None` | Path to compiled mlenergy-data directory. | `None` |
| `plot` | `bool` | If `True`, … | `False` |
| `duration_s` | `float` | Simulation duration for template building. | `600.0` |
| `dt_s` | `float` | Simulation timestep for template building. | `0.1` |
| `steady_skip_s` | `float` | Skip seconds for template building. | `0.0` |
Source code in openg2g/datacenter/workloads/inference.py
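A sketch of the load-or-generate pattern (the path is a placeholder; `models` and `data_sources` are the tuple and dict described above):

```python
from pathlib import Path

from openg2g.datacenter.workloads.inference import InferenceData

inference_data = InferenceData.ensure(
    Path("data/inference"),
    models,
    data_sources=data_sources,   # only needed on first use, when generating
    duration_s=600.0,
    dt_s=0.1,
)
```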
InferenceAugmentedPower
dataclass
¶
Result of inference power augmentation for one simulation timestep.
Attributes:

| Name | Type | Description |
|---|---|---|
| `power_w` | `ThreePhase` | Three-phase inference power (watts), excluding base load. |
| `power_by_model_w` | `dict[str, float]` | Per-model total active power (watts). |
| `active_replicas_by_model` | `dict[str, int]` | Per-model active replica count. |
Source code in openg2g/datacenter/workloads/inference.py
InferencePowerAugmenter
¶
Scales per-GPU inference power through server layouts to three-phase power.
Given per-GPU power values for each server (one value per server per model), applies per-server scaling, noise, activation masking, and phase summation to produce inference-level three-phase power.
This class is backend-agnostic. The offline datacenter feeds it template-indexed values; the online datacenter can feed it live-measured values. The datacenter backend is responsible for adding facility base load on top of the returned inference power.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `layouts` | `dict[str, ServerLayout]` | Per-model server layouts (physical topology). | required |
| `policies` | `dict[str, ActivationPolicy]` | Per-model activation policies determining which servers are active at each timestep. | required |
| `seed` | `int` | Random seed for noise RNG. | `0` |
Source code in openg2g/datacenter/workloads/inference.py
augment(per_gpu_by_model, t)
¶
Augment per-server per-GPU power to three-phase power.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `per_gpu_by_model` | `dict[str, ndarray]` | Mapping of model label to per-GPU power array of shape … | required |
| `t` | `float` | Current simulation time (seconds). | required |

Returns:

| Type | Description |
|---|---|
| `InferenceAugmentedPower` | Augmented inference power with three-phase totals, per-model power, and per-model active replica counts. |
Source code in openg2g/datacenter/workloads/inference.py
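A call-shape sketch (the `layouts`/`policies` dicts are placeholders; the per-GPU array is assumed to hold one value per server, as the class description states):

```python
import numpy as np

from openg2g.datacenter.workloads.inference import InferencePowerAugmenter

aug = InferencePowerAugmenter(layouts=layouts, policies=policies, seed=0)
layout = layouts["Llama-3.1-8B"]
result = aug.augment(
    {"Llama-3.1-8B": np.full(layout.num_servers, 650.0)},  # ~650 W per GPU on every server
    t=12.3,
)
result.power_w                   # ThreePhase inference power (base load not included)
result.active_replicas_by_model  # {"Llama-3.1-8B": ...}
```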
RequestsConfig
¶
Bases: BaseModel
Configuration for building per-model JSONL request files.
Attributes:

| Name | Type | Description |
|---|---|---|
| `dataset` | `str` | Dataset to sample prompts from (…). |
| `num_requests` | `int` | Number of requests to sample per model. |
| `max_completion_tokens` | `int` | Maximum output tokens per request. |
| `seed` | `int` | Random seed for dataset shuffling and oversampling. |
| `system_prompt` | `str` | System prompt prepended to every request. |
Source code in openg2g/datacenter/workloads/inference.py
RequestStore
¶
Per-model request dicts for online load generation.
Each model's requests are stored as a list of OpenAI Chat Completion streaming request dicts, suitable for submission to a vLLM server.
Attributes:

| Name | Type | Description |
|---|---|---|
| `requests_by_model` | | Mapping from model label to request dicts. |
Source code in openg2g/datacenter/workloads/inference.py
generate(models, config=None, *, extra_body_by_model=None)
classmethod
¶
Sample prompts and build per-model request dicts.
Requires pip install datasets openai.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `models` | `Sequence[InferenceModelSpec]` | Model specifications. Uses … | required |
| `config` | `RequestsConfig \| None` | Request generation config. Uses defaults if `None`. | `None` |
| `extra_body_by_model` | `dict[str, dict] \| None` | Optional per-model extra fields merged into every request dict (e.g. …). | `None` |
Source code in openg2g/datacenter/workloads/inference.py
save(out_dir)
¶
Write per-model JSONL files to out_dir.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `out_dir` | `Path` | Output directory. Created if it doesn't exist. | required |
Source code in openg2g/datacenter/workloads/inference.py
load(out_dir)
classmethod
¶
Load per-model JSONL files from out_dir.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `out_dir` | `Path` | Directory containing … | required |
Source code in openg2g/datacenter/workloads/inference.py
ensure(out_dir, models=None, config=None, *, extra_body_by_model=None)
classmethod
¶
Load request files from out_dir, generating first if needed.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `out_dir` | `Path` | Directory for JSONL files. | required |
| `models` | `Sequence[InferenceModelSpec] \| None` | Required if request files don't exist yet. | `None` |
| `config` | `RequestsConfig \| None` | Request generation config. Uses defaults if `None`. | `None` |
| `extra_body_by_model` | `dict[str, dict] \| None` | Optional per-model extra fields for request generation. Keyed by `model_label`. | `None` |
Source code in openg2g/datacenter/workloads/inference.py
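Usage sketch (the path is a placeholder; generation on first use requires `pip install datasets openai`, as noted for `generate` above):

```python
from pathlib import Path

from openg2g.datacenter.workloads.inference import RequestStore

requests = RequestStore.ensure(Path("data/requests"), models=models)
llama_requests = requests.requests_by_model["Llama-3.1-8B"]
```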
openg2g.datacenter.workloads.training
¶
Training workload: typed trace data and periodic overlay evaluation.
TrainingTraceParams
¶
Bases: BaseModel
Parameters for synthetic training-like power trace generation.
Attributes:

| Name | Type | Description |
|---|---|---|
| `duration_s` | `float` | Total duration (seconds). |
| `dt_s` | `float` | Timestep (seconds). |
| `seed` | `int` | Random seed. |
| `P_hi` | `float` | High plateau power (W). |
| `P_lo` | `float` | Low plateau power (W). |
| `sigma_hi` | `float` | Noise std in high plateaus (W). |
| `sigma_lo` | `float` | Noise std in low plateaus (W). |
| `seg_lo_range` | `tuple[float, float]` | Duration range for low segments (seconds). |
| `seg_hi_range` | `tuple[float, float]` | Duration range for high segments (seconds). |
| `dip_prob_per_sec` | `float` | Expected brief dips per second. |
| `dip_depth_range` | `tuple[float, float]` | Depth range for brief dips (W below current level). |
| `dip_dur_range` | `tuple[float, float]` | Duration range for brief dips (seconds). |
| `smooth_window_s` | `float` | Smoothing window width (seconds). |
| `ramp_s` | `float` | Initial warm-up ramp duration (seconds). |
| `ramp_from` | `float` | Power at ramp start (W). |
Source code in openg2g/datacenter/workloads/training.py
TrainingTrace
dataclass
¶
A single-GPU training power trace.
Attributes:

| Name | Type | Description |
|---|---|---|
| `t_s` | `ndarray` | Time vector (seconds), monotonically increasing. |
| `power_w` | `ndarray` | Power vector (watts) for one GPU, same length as `t_s`. |
Source code in openg2g/datacenter/workloads/training.py
generate(params=None)
classmethod
¶
Generate a synthetic training-like power trace.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `params` | `TrainingTraceParams \| None` | Generation parameters. Uses defaults if `None`. | `None` |

Returns:

| Type | Description |
|---|---|
| `TrainingTrace` | A new `TrainingTrace`. |
Source code in openg2g/datacenter/workloads/training.py
save(csv_path)
¶
Save the trace to a CSV file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `csv_path` | `Path` | Output CSV path. | required |
Source code in openg2g/datacenter/workloads/training.py
load(csv_path)
classmethod
¶
Load a training trace from CSV.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `csv_path` | `Path` | Path to CSV with columns `t_s` and `power_w`. | required |
Source code in openg2g/datacenter/workloads/training.py
ensure(csv_path, params=None)
classmethod
¶
Load from csv_path, generating first if needed.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `csv_path` | `Path` | Path to the training trace CSV. | required |
| `params` | `TrainingTraceParams \| None` | Generation parameters. Required when no cached file exists. Uses defaults if … | `None` |
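A short sketch of the generate/save/load round trip (the path is a placeholder; `generate()` with no arguments uses the default `TrainingTraceParams`):

```python
from pathlib import Path

from openg2g.datacenter.workloads.training import TrainingTrace

trace = TrainingTrace.generate()                 # synthetic single-GPU trace with default params
trace.save(Path("data/training_trace.csv"))
same_trace = TrainingTrace.load(Path("data/training_trace.csv"))
```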