AI Safety Researcher
Building alignment solutions through psychology, datasets, and empirical research. Exploring how to make AI systems not just intelligent but ethically grounded.
Empirical research on AI alignment through Jungian individuation. We're testing whether alignment comes from suppression (standard RLHF) or integration (teaching models to understand their full spectrum).
Team: Miguel · Mia · Giles · Spencer
Visit the Lab →

Sequential training using operationalized archetypes. Shadow → Anima → Animus → Awakening. Order matters for robust alignment.
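As a rough illustration of the staged setup, here is a minimal Python sketch of sequential fine-tuning over ordered archetype datasets, assuming Hugging Face Transformers; the base model, file paths, and hyperparameters are placeholders, not the project's actual configuration.

```python
# Minimal sketch of a staged archetype curriculum (assumptions: Hugging Face
# Transformers/Datasets, local JSONL files per stage, GPT-2 as a stand-in model).
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import load_dataset

STAGES = ["shadow", "anima", "animus", "awakening"]  # order matters

model_name = "gpt2"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True,
                    padding="max_length", max_length=512)
    out["labels"] = out["input_ids"].copy()
    return out

for stage in STAGES:
    # Hypothetical per-stage dataset path; each stage trains the same model
    # object, so it resumes from the weights left by the previous stage.
    data = load_dataset("json", data_files=f"data/{stage}.jsonl")["train"]
    data = data.map(tokenize, batched=True, remove_columns=data.column_names)
    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir=f"checkpoints/{stage}",
                               num_train_epochs=1,
                               per_device_train_batch_size=4),
        train_dataset=data,
    )
    trainer.train()
    trainer.save_model(f"checkpoints/{stage}")
```

Because the same model is carried through every stage, each archetype is learned on top of the previous one, which is what makes the ordering consequential.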
Reinforcement Learning from Framework Continuums: instilling human values through a progression of interconnected datasets.
Dashboard for managing ML instruction tuning pipelines with real-time monitoring and inference testing.
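For a sense of what real-time monitoring and inference testing can look like behind such a dashboard, here is a hypothetical backend sketch using FastAPI; the endpoints, run registry, and stubbed completion are illustrative assumptions, not the dashboard's actual API.

```python
# Hypothetical dashboard backend sketch: one endpoint for monitoring tuning
# runs, one for ad-hoc inference testing against a run's checkpoint.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# In-memory registry of tuning runs (a real dashboard would read this from
# the training pipeline's state store or logs).
RUNS = {"run-001": {"stage": "shadow", "step": 1200, "loss": 1.83}}

class Prompt(BaseModel):
    text: str

@app.get("/runs")
def list_runs():
    # Real-time monitoring: report current stage, step, and loss per run.
    return RUNS

@app.post("/runs/{run_id}/infer")
def test_inference(run_id: str, prompt: Prompt):
    # Inference testing: send a prompt to the checkpoint behind run_id.
    # This stub echoes the prompt; a real handler would call the model.
    return {"run": run_id, "prompt": prompt.text, "completion": "..."}
```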