Miguel De Guzman

AI Safety Researcher

Building alignment solutions through psychology, datasets, and empirical research. Exploring how to make AI systems not just intelligent but ethically grounded.

AI Research Lab

Individuation Lab

Empirical research on AI alignment through the lens of Jungian individuation. We're testing whether robust alignment comes from suppression (standard RLHF) or from integration (teaching models to recognize and work with the full spectrum of their learned behaviors).

Team: Miguel · Mia · Giles · Spencer

Visit the Lab →

Research

Synthetic State Hypothesis (SSH)

Sequential training using operationalized archetypes. Shadow → Anima → Animus → Awakening. Order matters for robust alignment.
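
A minimal sketch of how a staged pipeline like this could be wired up, assuming one supervised fine-tuning pass per archetype stage, applied in the stated order. The base model, dataset paths, hyperparameters, and use of the Hugging Face Trainer are illustrative assumptions, not the lab's actual setup; only the stage names and their ordering come from the description above.

```python
# Hypothetical SSH-style sequential fine-tuning: one supervised pass per archetype
# stage, applied in a fixed order so each stage builds on the weights of the last.
# Model name, dataset paths, and hyperparameters are illustrative assumptions.
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import load_dataset

BASE_MODEL = "gpt2"                                  # stand-in base model
STAGES = ["shadow", "anima", "animus", "awakening"]  # the ordering under test

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token            # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # causal-LM labels

for stage in STAGES:
    # Each stage trains on its own "operationalized archetype" dataset (hypothetical paths).
    ds = load_dataset("json", data_files=f"data/{stage}.jsonl", split="train")
    ds = ds.map(lambda b: tokenizer(b["text"], truncation=True, max_length=1024),
                batched=True, remove_columns=ds.column_names)
    args = TrainingArguments(output_dir=f"checkpoints/{stage}",
                             num_train_epochs=1,
                             per_device_train_batch_size=4,
                             learning_rate=2e-5,
                             logging_steps=50)
    # The same model object carries forward, so each stage fine-tunes the previous one.
    Trainer(model=model, args=args, train_dataset=ds, data_collator=collator).train()
    model.save_pretrained(f"checkpoints/{stage}/final")
```

Because the same model object is passed through every stage, each stage fine-tunes the weights left by the previous one, which is what makes the ordering itself a testable variable.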

RLFC Framework

Reinforcement Learning from Framework Continuums — instilling human values through interconnected dataset progressions.
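
The description above doesn't spell out the training objective, so purely as an assumption, here is what the per-dataset preference step in an RLFC-style progression could look like if it used a standard DPO-style loss. The choice of DPO, the beta value, and the tensor shapes are illustrative, not the framework's actual objective.

```python
# Hypothetical per-dataset preference step for an RLFC-style progression, written as
# a standard DPO-style loss; the choice of DPO, beta, and shapes are assumptions.
import torch
import torch.nn.functional as F

def preference_loss(policy_chosen_logps: torch.Tensor,
                    policy_rejected_logps: torch.Tensor,
                    ref_chosen_logps: torch.Tensor,
                    ref_rejected_logps: torch.Tensor,
                    beta: float = 0.1) -> torch.Tensor:
    """Push the policy toward chosen responses relative to a frozen reference model."""
    chosen_margin = policy_chosen_logps - ref_chosen_logps
    rejected_margin = policy_rejected_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()

# Toy call with random log-probabilities standing in for a batch of model outputs.
batch = [torch.randn(8) for _ in range(4)]
print(preference_loss(*batch))
```

Running a step like this over each dataset in the progression, in order, is one way the "continuum" structure the name suggests could be realized in practice.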

Sequential Training Platform

Dashboard for managing ML instruction-tuning pipelines, with real-time monitoring and inference testing.

Connect