Program database and continual learning across experiments
| ID | Hypothesis | Domain | Score | Δ | Status | Time |
|---|---|---|---|---|---|---|
| EXP-047 | LR schedule: cosine annealing with T_max=500 | Transformer opt. | 0.742 | +1.4% | KEPT | 2m ago |
| ╰ | Dropout rate increase 0.2 → 0.3 in FFN layers | Transformer opt. | 0.728 | -0.3% | REVERTED | 8m ago |
| ╰ | Weight decay 0.01 → 0.001 | Transformer opt. | 0.731 | +0.9% | KEPT | 14m ago |
| ╰ | Gradient clipping max_norm 1.0 → 0.5 | Transformer opt. | 0.724 | +0.4% | KEPT | 19m ago |
| ╰ | WordPiece tokenization for code-specific tokens | Transformer opt. | 0.718 | +3.4% BLEU | FLAGGED | 25m ago |
| EXP-046 | Graph conv pooling: mean → attention-weighted sum | Drug discovery | 0.902 | +1.2% | KEPT | 28m ago |
| ╰ | SMILES dropout augmentation rate 0.1 → 0.2 | Drug discovery | 0.896 | +0.7% | KEPT | 34m ago |
| ╰ | Node feature normalization: batch norm → layer norm | Drug discovery | 0.888 | -0.5% | REVERTED | 41m ago |
| EXP-047 | Label smoothing ε=0.1 applied to SFT objective | Transformer opt. | 0.713 | +0.1% | KEPT | 47m ago |
| ╰ | Batch size linear ramp 32 → 128 over first 1k steps | Transformer opt. | 0.704 | +0.5% | KEPT | 53m ago |
| EXP-015 | TD3 policy update delay: 2 → 4 steps | RL robotics | 0.661 | +2.1% | KEPT | 58m ago |
| ╰ | Exploration noise std 0.1 → 0.2 with decay schedule | RL robotics | 0.653 | +0.9% | KEPT | 1h 5m ago |
| ╰ | Critic network: 2 layers → 3 layers, hidden 256 | RL robotics | 0.648 | -0.6% | REVERTED | 1h 12m ago |
| ╰ | Polyak averaging coefficient 0.995 → 0.999 | RL robotics | 0.655 | +1.1% | FLAGGED | 1h 19m ago |
| EXP-046 | 3D conformer features concatenated to fingerprint | Drug discovery | 0.881 | +2.3% | FLAGGED | 1h 27m ago |
| ╰ | Flash Attention 2 kernel: latency vs throughput | Transformer opt. | 0.700 | -1 ms | KEPT | 1h 44m ago |
| ╰ | Layer norm epsilon: 1e-5 → 1e-8 | Transformer opt. | 0.698 | +0.2% | KEPT | 1h 59m ago |
| ╰ | Increased attention dropout 0.0 → 0.1 | Transformer opt. | 0.691 | -1.1% | REVERTED | 2h 14m ago |
| EXP-091 | Multi-task loss weighting: inverse class frequency | Drug discovery | 0.874 | +1.8% | KEPT | 2h 38m ago |
| ╰ | Dropout 0.1 → 0.3 in message-passing layers | Drug discovery | 0.857 | -0.4% | REVERTED | 2h 52m ago |
| EXP-013 | Reward normalization with running mean/std | RL robotics | 0.633 | +0.8% | KEPT | 3h 10m ago |
| ╰ | Policy network hidden size 256 → 512 | RL robotics | 0.622 | -0.5% | REVERTED | 3h 25m ago |
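The kept EXP-047 entry uses cosine annealing, i.e. lr(t) = lr_min + (lr_max - lr_min) * (1 + cos(pi * t / T_max)) / 2. A minimal sketch of that schedule follows; only T_max=500 comes from the log, while `lr_max` and `lr_min` are illustrative values:

```python
import math

def cosine_annealed_lr(step: int, t_max: int = 500,
                       lr_max: float = 3e-4, lr_min: float = 0.0) -> float:
    """Cosine-annealed learning rate. Only t_max=500 is from the log;
    lr_max/lr_min are assumed example values."""
    step = min(step, t_max)  # hold at lr_min once the schedule completes
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * step / t_max))
```

The decay starts at `lr_max`, passes the midpoint value at t = T_max/2, and flattens out at `lr_min`.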
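The label-smoothing entry (ε=0.1) replaces a one-hot target with a mixture of the target and a uniform distribution: y' = (1 - ε) y + ε / K. A sketch, assuming plain list inputs for illustration:

```python
def smooth_labels(one_hot: list[float], eps: float = 0.1) -> list[float]:
    """Label smoothing: redistribute eps of the target mass uniformly
    over all K classes (eps=0.1 is the value from the log)."""
    k = len(one_hot)
    return [(1.0 - eps) * p + eps / k for p in one_hot]
```

For a 4-class one-hot target this yields 0.925 on the true class and 0.025 on each other class, which still sums to 1.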
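The flagged EXP-015 change raises the Polyak averaging coefficient from 0.995 to 0.999, i.e. a slower exponential moving average of the online network into the target network: θ' ← ρ θ' + (1 - ρ) θ. A sketch over flat weight lists (the convention where ρ is the retained fraction is assumed):

```python
def polyak_update(target: list[float], online: list[float],
                  rho: float = 0.999) -> list[float]:
    """Polyak (EMA) update of target-network weights toward the online
    weights; rho=0.999 is the flagged value from the log."""
    return [rho * t + (1.0 - rho) * o for t, o in zip(target, online)]
```

A larger ρ means the target network tracks the online network more slowly, which typically stabilizes the critic's bootstrap targets.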
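EXP-013's kept change normalizes rewards with a running mean and standard deviation. One numerically stable way to maintain those statistics online is Welford's algorithm; a sketch (the class name and eps are illustrative):

```python
class RunningNorm:
    """Running mean/std via Welford's online algorithm, used to
    standardize rewards as they arrive."""

    def __init__(self, eps: float = 1e-8):
        self.count = 0
        self.mean = 0.0
        self.m2 = 0.0     # sum of squared deviations from the mean
        self.eps = eps    # avoids division by zero early on

    def update(self, x: float) -> None:
        self.count += 1
        delta = x - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (x - self.mean)

    def normalize(self, x: float) -> float:
        var = self.m2 / max(self.count, 1)
        return (x - self.mean) / ((var + self.eps) ** 0.5)
```

Each incoming reward first updates the statistics, then is standardized before being used in the policy-gradient or TD update.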
[Figure: predictive power across experiments (47 experiments)]