
@ar-agents/mercadolibre vs alternatives

What you get when you reach for an SDK to talk to MELI from your agent. An honest comparison, no straw men: every claim below links to its source so you can verify it.

| Feature | This | Archived SDK | Naïve fetch |
| --- | --- | --- | --- |
| Maintained (last meaningful commit) | Active | Feb 2022 | Your repo |
| Typed end-to-end | TypeScript strict + Zod-validated responses | — | partial |
| Vercel AI SDK 6 tools | 14 drop-in tools with discriminated-union results | — | — |
| OAuth single-use refresh-token coalescing | Per-userId mutex + documented CAS pattern | — | — |
| Per-seller rate limiter (token bucket) | Default 24/s, burst 60, idle-bucket sweep | — | — |
| Idempotent-only retry by default | POST/PATCH never retry on 5xx (split-brain risk) | — | — |
| /myfeeds 2-day replay + dedup | iterateAllMissedFeeds with (topic, resource, sent) key | — | — |
| Claim 2-day SLA defender | Sequential evidence uploads + failedEvidences surface | — | — |
| Reputation thermometer alerts | evaluateReputationAlerts + monitorReputation generator | — | — |
| Promotion margin guard | autoOptInPromotions skips below configurable floor | — | — |
| Heuristic spam classifier for questions | Explainable features, no LLM dependency | — | — |
| Edge-runtime native | Web Crypto only, no node:* imports | — | partial |
| Telemetry hooks | onRequest / onResponse / onRetry / onRateLimitWait | — | — |
| MCP server bundling | @ar-agents/mcp = drop into Claude Desktop / Cursor / Codeium | — | — |
| Bundle size (brotli, full ESM with all deps) | 11 KB | n/a | ≥ 80 KB after auth+retry |
| Production CVEs | 0 | n/a | Your call |
| Tests (unit + integration vs live MELI + property-based) | 111 + 4 + 10 | n/a | Your call |

Why the official SDK isn’t coming back

The mercadolibre/nodejs-sdk repository was archived on Feb 14, 2022. The README redirects to developers.mercadolibre.com.ar, i.e., "here are the docs, build it yourself." Since 2022 the ecosystem has converged on a few unmaintained forks plus a lot of bespoke fetch wrappers, none of which handle the production gotchas the archived SDK also glossed over (single-use refresh-token races, per-seller rate limits, /myfeeds replay, claim 2-day SLA defense, etc.).

@ar-agents/mercadolibre rebuilds the typed surface from scratch as production-grade infrastructure for the AI SDK era. Drop-in for Experimental_Agent from Vercel AI SDK 6; deploy to any V8 isolate (Vercel Edge, Cloudflare Workers, Deno).

What an honest naïve-fetch implementation looks like

Most teams start with a hand-rolled fetch wrapper. That gets you 80% of the surface in a weekend. The remaining 20% is what shows up under load, and what makes a CTO wince at the on-call ticket. Below, each capability in @ar-agents/mercadolibre is paired with the cost of hand-rolling the equivalent.

  • Single-use refresh-token coalescing
    naïve fetch: Per-process mutex (~100 LOC) + database compare-and-swap predicate (~80 LOC) + tests for the race window (~60 LOC). Most teams skip the CAS and lose 5–10% of refreshes silently.
  • Per-seller token-bucket rate limit with idle GC
    naïve fetch: In-memory bucket with refill math (~80 LOC) + idle eviction sweep (~30 LOC). 90% of fetch wrappers hardcode setTimeout-style throttling, which doesn't isolate tenants.
  • /myfeeds 2-day replay + dedup
    naïve fetch: Pagination loop with offset (~40 LOC) + dedup table keyed by (topic, resource, sent) (~30 LOC) + KV/DB integration. Almost nobody ships this.
  • Idempotent-only retry classifier
    naïve fetch: Default retry loops on 5xx for any verb. POST/PUT split-brain hits production a few times per quarter and is debugged via Slack screenshots — never via the SDK.
  • Claim defense with sequential evidence + partial-failure surface
    naïve fetch: Promise.all races MELI's one-shot semantics. The first time you defend a claim with 3 evidences and one fails, the seller is half-defended and you're out of recourse.
  • 14 typed Vercel AI SDK 6 tools
    naïve fetch: Hand-write Zod schemas for every endpoint (~600 LOC) + tool description copy that the LLM uses to decide WHEN to call (~14 paragraphs). Easy to do badly; hard to do well.
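The refresh-token bullet above is, at its core, single-flight semantics: concurrent callers for the same userId must share one refresh request so the single-use token is spent exactly once. A minimal in-process sketch (the names `coalescedRefresh` and `refreshFn` are illustrative, not the package's API, and the database compare-and-swap half of the documented pattern is omitted):

```typescript
type TokenPair = { accessToken: string; refreshToken: string };

// One in-flight refresh promise per userId.
const inFlight = new Map<string, Promise<TokenPair>>();

function coalescedRefresh(
  userId: string,
  refreshFn: () => Promise<TokenPair>,
): Promise<TokenPair> {
  // If a refresh is already running for this user, piggyback on it
  // instead of spending the single-use refresh token a second time.
  const existing = inFlight.get(userId);
  if (existing) return existing;
  const p = refreshFn().finally(() => inFlight.delete(userId));
  inFlight.set(userId, p);
  return p;
}
```

The per-process map is only half the story: across processes you still need the CAS predicate on the persisted token row, which is why the library documents both.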
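The per-seller token bucket with idle GC can be sketched in a few lines. The 24 req/s refill and burst of 60 mirror the defaults quoted in the table; the class and method names here are illustrative, not the package's exports:

```typescript
interface Bucket { tokens: number; last: number }

class PerSellerLimiter {
  private buckets = new Map<string, Bucket>();
  constructor(
    private ratePerSec = 24,
    private burst = 60,
    private idleMs = 60_000,
  ) {}

  // Returns true if a request for this seller may proceed now.
  tryAcquire(sellerId: string, now = Date.now()): boolean {
    const b = this.buckets.get(sellerId) ?? { tokens: this.burst, last: now };
    // Refill proportionally to elapsed time, capped at the burst size.
    b.tokens = Math.min(
      this.burst,
      b.tokens + ((now - b.last) / 1000) * this.ratePerSec,
    );
    b.last = now;
    this.buckets.set(sellerId, b);
    if (b.tokens < 1) return false;
    b.tokens -= 1;
    return true;
  }

  // Idle sweep: drop buckets untouched for idleMs so tenants don't leak memory.
  sweep(now = Date.now()): void {
    for (const [id, b] of this.buckets) {
      if (now - b.last > this.idleMs) this.buckets.delete(id);
    }
  }
}
```

The key property, unlike setTimeout-style throttling, is isolation: one seller exhausting its bucket never delays another seller's requests.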
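The /myfeeds dedup table reduces to a set keyed on (topic, resource, sent). A sketch with an in-memory Set standing in for the KV/DB integration (the `isNewFeed` helper is hypothetical):

```typescript
type Feed = { topic: string; resource: string; sent: string };

const seen = new Set<string>();

// Returns true the first time a (topic, resource, sent) triple is observed,
// false on every replayed duplicate.
function isNewFeed(f: Feed): boolean {
  const key = `${f.topic}|${f.resource}|${f.sent}`;
  if (seen.has(key)) return false;
  seen.add(key);
  return true;
}
```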
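An idempotent-only retry classifier is small once you commit to it. This conservative sketch retries only safe read verbs on 429/5xx, so POST, PUT, and PATCH can never replay a write (the function name is illustrative, not the package's API):

```typescript
// Verbs safe to replay blindly; writes are excluded on purpose.
const RETRYABLE_VERBS = new Set(["GET", "HEAD", "OPTIONS"]);

function shouldRetry(method: string, status: number): boolean {
  // Only transient statuses qualify: rate limiting or server-side 5xx.
  const transient = status === 429 || (status >= 500 && status < 600);
  return transient && RETRYABLE_VERBS.has(method.toUpperCase());
}
```

A 5xx on a write is ambiguous: the server may have committed before failing, so replaying it is exactly the split-brain the bullet describes.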
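For the typed-tools bullet, the payoff of discriminated-union results is that both the type checker and the calling LLM can branch on a single tag. A sketch with illustrative names (`OrderResult` and `describe` are not the package's actual shapes):

```typescript
// The "status" literal is the discriminant: narrowing on it gives each
// branch its own fields with no casts.
type OrderResult =
  | { status: "ok"; orderId: string; total: number }
  | { status: "not_found"; orderId: string }
  | { status: "error"; message: string };

function describe(r: OrderResult): string {
  switch (r.status) {
    case "ok":
      return `order ${r.orderId}: $${r.total}`;
    case "not_found":
      return `order ${r.orderId} missing`;
    case "error":
      return `failed: ${r.message}`;
  }
}
```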

Source for every claim → github.com/ar-agents/ar-agents