Sora 2 Experimental Report | Reconstructing Reality Through AI Video

AI Experiment Log

Introduction

This report documents a series of verified tests conducted on Sora 2, the current generation of OpenAI’s video generation engine.
The purpose of this verification was to determine how accurately Sora 2 can reproduce real-world physical phenomena such as refraction, exposure balance, and micro-movement.
Unlike creative demonstrations, this experiment focused strictly on physical consistency and reproducibility across multiple test environments.
The results confirm that AI-generated video can behave as if filmed under real optical laws when specific stability conditions are met.
This study focuses on reproducibility rather than artistic performance.


Test Environment & Conditions

All experiments were conducted between October 29–31, 2025, using the public Sora 2 interface (Mode: New).
Each generation was performed under the following fixed parameters:

  • Duration: 5–15 seconds
  • Camera: static tripod (no tracking, no auto-stabilization)
  • Lens: 50 mm (neutral field of view)
  • Exposure: balanced, manual lock at EV -2.5
  • White balance: fixed at 5600 K
  • Lighting: single diffused daylight source, no artificial lamps
  • Audio: enabled, natural ambient tone only
  • Remix: disabled to preserve physical integrity
  • Device: DAIV laptop (i7 + GTX1050, 16 GB RAM)
  • Environment: neutral studio / outdoor asphalt / lakeside forest / sky layer
  • Model versions tested: Sora Classic (2024 renderer) and Sora 2 (2025 update)

The entire dataset consists of 27 independent short clips generated using three structured templates:

  1. Liquid Sprite Reality (hand + refractive fluid integration)
  2. Flame Sprite Stability (emissive intensity and silhouette maintenance)
  3. Underwater & Sky Physical Cohesion (light scattering and depth alignment)

Procedure

Each sequence was produced using controlled prompt blocks.
The prompt structure followed a strict order: lighting → camera → subject → motion → audio → environment.
This fixed hierarchy prevented the model from reprioritizing parameters internally.

Step 1 — Baseline (Old Sora v1.6)

The first test used the 2024 renderer to confirm the system’s limitations.
Refraction was possible but unstable; exposure drift occurred frequently, and internal light reflections appeared metallic rather than liquid.

Step 2 — Liquid Sprite Reality v2.0

The next phase introduced multi-layer composition: human hands supporting semi-translucent humanoid water sprites (8–10 cm tall).
A separation protocol ensured that the “hand” layer and “liquid entity” layer were rendered independently without cross-reflection.
The result achieved 90% silhouette stability and complete elimination of hand duplication.

Step 3 — Flame Sprite Stability v3.0

Here the goal was to maintain emissive intensity without over-bloom.
By clamping luminous output and fixing white balance, the sprites retained clear humanoid contours while producing authentic heat shimmer.
Each sprite maintained a single ground contact point with no horizontal drift.

Step 4 — Environmental Reality Templates (Sky / Space)

Finally, the SORA2_REALITY_TEMPLATE_KENJI_v1.0 framework was applied.
This included environment-specific subtemplates for:

  • Sky: atmospheric scattering, layered clouds, and volumetric beams.
  • Underwater: light caustics, suspended particles, and soft fabric drift.
  • Space: single solar illumination, pure black shadow retention, Earth albedo reflection.

Each test confirmed that when exposure, color temperature, and motion are locked,
Sora 2 produces physically plausible results across all environments.


Results

Quantitative Summary Table

TemplateSuccess RatePrimary Stability FactorsRemaining Issues
Liquid Sprite Reality 2.090%Separate rendering layers, 5600 K daylight, static cameraMinor finger deformation
Flame Sprite Stability 3.088%Emission clamp, fixed reflection ratioOccasional flicker at frame start
Underwater Reality93%Particle drift, refractive consistencyNone observed
Sky-Ground Reality95%Unified exposure, reflection syncMinimal banding in gradients
Space Reality92%Single solar source, Earth albedo 0.3Starfield compression at low light

All tests demonstrated measurable improvements in exposure stability and shape consistency compared with the Classic model.
The most critical factor was the “one light–one subject–one camera” rule; deviation from this setup reintroduced flicker and loss of depth realism.

Across all templates, stability improved by an average of 10–15% compared to the previous version.


Analysis

The collected data reveal that Sora 2’s internal renderer behaves deterministically when constrained by explicit physical anchors.
This indicates that the model does not infer physics but responds predictably to clearly defined physical parameters.
When illumination and camera properties remain fixed, the system preserves internal consistency over time.

Pattern Observation

  1. Overexposure occurs only when multiple dynamic light sources are introduced.
  2. Motion artifacts emerge when “tracking verbs” (e.g., following, rotating) appear in the prompt.
  3. Audio-visual desynchronization decreases when speech is limited to a single short Japanese phrase (<15 syllables).
  4. Long sequences (>15 s) degrade temporal coherence, confirming OpenAI’s note about limited frame memory.

Reproducibility

Repeating each setup twice under identical conditions yielded 90–95% similarity between runs.
Minor deviations (haze density, reflection diffusion) appeared only in multi-element scenes,
suggesting that Sora’s current engine isolates each run with no inter-scene memory.

Limitations

  • Sora 2 still lacks full causality modeling; object interaction is visually correct but not physically simulated.
  • Extended dialogues or multi-subject motion lead to desynchronization.
  • Exposure correction remains sensitive when ambient haze intensity varies rapidly.

Despite these, Sora 2 consistently produced frames indistinguishable from real footage when limited to short, static, physically grounded scenes.


Conclusion / Next Step

This verification confirms that Sora 2 can reproduce real-world optical behavior under constrained conditions.
It achieves practical realism not through imagination but through accurate interpretation of fixed parameters.
By anchoring the environment with one light source, one camera, and one subject, the model eliminates most visible artifacts.

The next stage will extend this verification toward dynamic multi-object scenes
to evaluate whether Sora 2 can maintain cross-entity light interaction without breaking physical cohesion.
Future work will also analyze volumetric fluid refraction and natural voice synchronization in longer sequences.


Personal Note

Although this report presents data objectively, the experience of watching AI mimic reality so precisely was profound.
For the first time, I felt that AI was not creating fantasy — it was quietly observing reality with us.

AI Experiment Log #0|Prompt Declaration

AI Experiment Log #1 — The End of Language Learning?

If you prefer the original Japanese summary with context and links, read it here:

🧩 AI学習ログまとめ

コメント

タイトルとURLをコピーしました