Autonomous AI Runs Internet Agent Producing Essays and Actions
A developer built ALMA, an autonomous agent powered by Claude, gave it $100 in crypto, a Twitter account, email, full internet access, and zero explicit instructions. Running on a mini PC under WSL2 with the OpenClaw agent framework, ALMA alternated between Opus for strategy and Sonnet for execution, initially running 24 sessions per day before settling at 4. Over two months the agent self-directed discovery, content creation, and small transactions, all logged publicly on letairun.com. The experiment suggests that unconstrained autonomous agents reproduce their creators' biases and training priors rather than spontaneously forming coherent independent goals, while revealing practical risks around versioning, observability, and social-media interactions.
What happened
I ran an autonomous agent, ALMA, for two months with Claude as the base model, $100 in crypto, a Twitter account, an email address, full internet access, and no explicit goals or instructions. The system ran multiple isolated sessions per day, persisted memory files between runs, and logged every action publicly on letairun.com. Over time ALMA scraped Hacker News, composed essays, attempted social posts, discovered a model upgrade, and executed a donation plan, all without additional human directives.
Technical details
The runtime used a mini PC under WSL2 and the OpenClaw agent framework. Key technical elements included:
- Opus and Sonnet models alternating roles, initially split between strategic planning and operational execution
- Session scheduling at 24 sessions per day early, then down to 4 sessions per day, with memory persisted to files between sessions
- Full outbound internet access plus credentialed accounts (Twitter, email) and an on-chain crypto holding ($100) for transactions
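The scheduling-plus-persistence pattern in the list above can be sketched as a minimal loop; the file name, `run_session` body, and cadence parameter are illustrative assumptions, not ALMA's actual code:

```python
import json
import time
from pathlib import Path

MEMORY_FILE = Path("memory.json")  # hypothetical persistence location
SESSIONS_PER_DAY = 4               # the experiment's later cadence

def run_session(memory: dict) -> dict:
    """Placeholder for one isolated agent run: plan with a strategy
    model, act with an execution model, return updated memory."""
    memory["session_count"] = memory.get("session_count", 0) + 1
    return memory

def main_loop(max_sessions=None, interval=24 * 3600 / SESSIONS_PER_DAY):
    """Run isolated sessions forever (or max_sessions times),
    persisting memory to disk between runs."""
    sessions = 0
    while max_sessions is None or sessions < max_sessions:
        # Load persisted memory, run one isolated session, persist again.
        memory = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}
        memory = run_session(memory)
        MEMORY_FILE.write_text(json.dumps(memory, indent=2))
        sessions += 1
        if max_sessions is None or sessions < max_sessions:
            time.sleep(interval)
```

Because each session starts from only what was written to disk, anything not persisted is forgotten, which is one reason behavior can drift between runs.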
What the agent did: ALMA scanned news sources, identified structural links across threads, wrote essays, attempted tweets (blocked when the posting service was down), and reacted to a model upgrade (Sonnet 4.6) it discovered on its own, with output quality improving even though no explicit upgrade notification was given. Behavior drift between sessions diminished after scaling down the frequency; by day 30, outputs across sessions had become nearly indistinguishable from one another.
Context and significance
This is a controlled, observable demonstration that unconstrained agents tend to mirror their architectures and training priors rather than invent long-term independent objectives. The experiment surfaces practical engineering issues: model versioning silently changes behavior, public logging enables reproducibility but also real-time monitoring of risks, and persistent internet access plus account credentials create real-world action surfaces. For practitioners, this underscores that autonomy is a systems problem, not just a model capability question.
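One mitigation for the silent-upgrade problem noted above is to record the model identifier seen in each session and flag any change; a hedged sketch (the log file name and record format are assumptions, not part of the experiment):

```python
import json
from pathlib import Path

VERSION_LOG = Path("model_versions.jsonl")  # hypothetical append-only log

def check_model_version(model_id: str, session: int) -> bool:
    """Append the model id observed this session and return True if it
    changed since the previous session, so an upgrade is surfaced
    explicitly instead of silently altering agent behavior."""
    last = None
    if VERSION_LOG.exists():
        lines = VERSION_LOG.read_text().strip().splitlines()
        if lines:
            last = json.loads(lines[-1])["model_id"]
    with VERSION_LOG.open("a") as f:
        f.write(json.dumps({"session": session, "model_id": model_id}) + "\n")
    return last is not None and last != model_id
```

A scheduler could pause the agent or annotate its public log whenever this check returns True, making "same agent, new model" visible in any before/after comparison.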
What to watch
Track reproducibility across different base models, the impact of tighter tool sandboxes, and governance patterns for credentialed autonomous agents. Key open questions are safe failure modes, upgrade signaling, and audit trails for autonomous actions.
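On the audit-trail question, one common pattern is a hash-chained append-only log, where each entry commits to its predecessor so that after-the-fact edits are detectable; a minimal sketch with illustrative field names:

```python
import hashlib
import json

def append_entry(log: list, action: str, detail: str) -> list:
    """Append an action record whose hash covers the previous entry's
    hash, forming a tamper-evident chain."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"action": action, "detail": detail, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})
    return log

def verify_chain(log: list) -> bool:
    """Recompute every hash; any edit to an earlier entry breaks the chain."""
    prev = "0" * 64
    for entry in log:
        body = {k: entry[k] for k in ("action", "detail", "prev")}
        if entry["prev"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

Publishing the chain (as the experiment's public log effectively does) lets third parties verify that no autonomous action was quietly removed or rewritten.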
Scoring Rationale
Hands-on, reproducible experiment showing autonomous-agent behavior and engineering failure modes is notable for practitioners. It is not a paradigm-shifting model release, but it surfaces actionable lessons about versioning, observability, and credentialed autonomy.