InfoFi’s collapse is both an industry story and a product lesson: the “post-to-earn” boom on X wasn’t just shut down from the outside; it imploded under its own incentive design.
The day InfoFi broke
On January 15, 2026, X’s Head of Product, Nikita Bier, announced a policy change that revoked API access for any app rewarding users for posting, effectively banning “InfoFi” from the platform. His rationale was blunt: reward-for-posting apps were driving “a tremendous amount of AI slop & reply spam,” degrading the core user experience on X.
Behind that decision was hard data. Analytics from CryptoQuant showed that on January 9 alone, bots generated 7.75 million crypto-related posts on X, a 1,224% spike attributed largely to InfoFi reward systems. Timelines became unreadable, overrun with repetitive threads, low-effort prompts, and automated replies where real conversation should have been.
The InfoFi unwind in real time
Once X cut off the API, the InfoFi sector repriced itself in hours.
Kaito, the largest InfoFi platform, announced it was sunsetting Yaps and shutting down incentivized leaderboards as it pivoted to a different creator tool stack.
The KAITO token dropped roughly 15–20% within minutes, falling from around $0.70 to the mid-$0.50s, as traders exited a model whose utility had been tightly coupled to X engagement.
Other InfoFi-linked assets followed: COOKIE and similar tokens declined double digits on the day the ban hit, dragging the broader InfoFi market cap down more than 10%.
Kaito founder Yu Hu made the structural diagnosis explicit, stating that after discussions with X, “a fully permissionless distribution system is no longer viable” for serious brands and creators. In other words: with no filter between incentives and bots, the model couldn’t be salvaged at scale.
What actually killed InfoFi
InfoFi didn’t fail for lack of account verification; it failed because it incentivized raw activity without verifying quality or humanity.
When you pay for posting volume, you optimize for volume. Bots don’t sleep, don’t get bored, and can produce thousands of posts per hour.
In any open system that distributes value purely on participation metrics—posts, replies, clicks—machines will always outcompete humans when there is no identity or quality filter.
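The arithmetic behind that claim is straightforward. A minimal sketch, with purely illustrative numbers (not figures from this article), of what happens when a reward pool is split in proportion to post volume:

```python
# Illustrative sketch: why volume-priced rewards favor bots.
# The post counts and pool size below are assumptions for the example.

def reward_shares(post_counts: dict[str, int], pool: float) -> dict[str, float]:
    """Split a reward pool in proportion to raw post counts."""
    total = sum(post_counts.values())
    return {who: pool * n / total for who, n in post_counts.items()}

# 100 humans posting ~5 times a day vs. 100 bots posting 2,000 times a day.
counts = {"humans": 100 * 5, "bots": 100 * 2000}
shares = reward_shares(counts, pool=10_000)
# Bots capture over 99.7% of the pool; humans split the remainder.
```

With no identity or quality filter in the payout function, the only lever is volume, and volume is exactly what machines are best at.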
This pattern is not unique to InfoFi. Airdrops, governance incentives, and “engage-to-earn” campaigns all face the same systemic vulnerability: once rewards are decoupled from trusted identity and meaningful contribution, they attract sybil attacks, farmed accounts, and automated spam at a pace no moderation team can keep up with. The problem is structural, not cosmetic.
Why this matters for Self
From Self’s vantage point, the InfoFi collapse sits right at the center of what needs to be fixed in Web3 marketing and incentive design.
Web3 campaigns, creator tools, and social protocols that cannot distinguish real users from machines will, over time, become hostile environments for the very humans they aim to reward.
InfoFi’s implosion is a live-fire case study of what happens when distribution is “permissionless” in the narrow sense (no gatekeeping) but blind to identity and humanity.
Self’s thesis is that identity is the missing layer—not identity as in doxxing or data extraction, but identity as in cryptographic proof that a participant is a unique human, controlled by that human, and portable wherever incentives exist. That is the layer InfoFi never had.
The path forward: incentives built on identity
A new generation of incentive systems will not abandon rewards; they will rebuild them on top of human verification at the protocol level.
Platforms could restrict rewards, comments, or campaign eligibility to verified humans, dramatically reducing the economic surface area for bots.
Campaigns could filter participants at the source—before tokens are distributed—rather than relying on retroactive sybil filtering after the damage is done.
This is where Self’s approach comes in. Using zero-knowledge proofs, users can prove they are human (and, where needed, meet specific criteria) without exposing personal data or linking all their activity across platforms. They verify once, own that proof, and reuse it across ecosystems that care about rewarding real engagement instead of automated noise.
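As a sketch of what that gating looks like in practice: the function and proof format below are hypothetical stand-ins for a real zero-knowledge verifier (such as the one Self provides), not its actual API. The point is structural: eligibility is checked before any tokens move, not cleaned up afterward.

```python
# Hypothetical sketch of an identity-gated reward flow.
# `verify_humanity_proof` and the "zkp:" proof format are placeholders
# for a real ZK humanity verifier; all names here are assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Claim:
    account: str
    humanity_proof: Optional[str]  # opaque ZK proof, or None if unverified

def verify_humanity_proof(proof: Optional[str]) -> bool:
    # Placeholder check: a real verifier validates the proof
    # cryptographically without learning anything beyond
    # "this is a unique human".
    return proof is not None and proof.startswith("zkp:")

def eligible_accounts(claims: list[Claim]) -> list[str]:
    # Filter at the source, before distribution, rather than
    # relying on retroactive sybil filtering.
    return [c.account for c in claims if verify_humanity_proof(c.humanity_proof)]

claims = [Claim("alice", "zkp:ab12"), Claim("bot-farm-01", None)]
# eligible_accounts(claims) → only "alice" qualifies
```

Because the proof is portable, the same verified human could pass this gate across many campaigns without re-verifying or linking their activity between them.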
Paying for attention and nurturing a creator economy were never the core mistake. InfoFi didn’t die because it tried to reward information; it died because, structurally, it could not answer a simple question at scale: who deserves to be paid?
The next wave of social, creator, and incentive protocols will be built on identity infrastructure that is private, portable, and provably human. Without that, every reward model—no matter how innovative—eventually converges on the same endpoint: a bot farm with a market cap.