What's new in beta.36

v2.0.0-beta.36 is mostly a Cloudflare-themed release: a new Images Worker binding, Hyperdrive Worker bindings, and R2 lifecycle policies, all contributed from outside the core team. Props to the contributors throughout; full credits are in the Contributors section.

Cloudflare Images, Cloudflare's image transformation runtime, is now exposed as a Worker binding. The Effect-native client takes an Effect Stream<Uint8Array> as input (typically the request body) and either streams a transformed image back out or returns typed metadata.

// alchemy.run.ts: declare the pipeline resource
export const Pipeline = yield* Cloudflare.Images({ name: "PIPELINE" });

// In your Worker:
export default class ImageWorker extends Cloudflare.Worker<ImageWorker>()(
  "ImageWorker",
  { main: import.meta.filename },
  Effect.gen(function* () {
    const images = yield* Cloudflare.Images.bind(Pipeline);
    return {
      fetch: Effect.gen(function* () {
        const request = yield* HttpServerRequest;
        // Probe the upload: returns format, width, height, etc.
        if (request.url.endsWith("/info")) {
          const info = yield* images.info(request.stream);
          return yield* HttpServerResponse.json(info);
        }
        // Transform: resize to 512×512 WebP, stream the result back.
        const transformed = yield* images
          .input(request.stream)
          .transform({ width: 512, height: 512 })
          .output({ format: "image/webp" });
        return HttpServerResponse.stream(transformed.body);
      }),
    };
  }).pipe(Effect.provide(Cloudflare.ImagesBindingLive)),
) {}
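
A quick way to exercise both routes once the Worker is deployed is to POST raw image bytes at them. A minimal sketch, assuming a hypothetical hostname image-worker.example.com and a local photo.jpg:

// Hypothetical hostname and file, for illustration only.
const bytes = new Uint8Array(await Bun.file("photo.jpg").arrayBuffer());

// /info probes the upload without transforming it.
const info = await fetch("https://image-worker.example.com/info", {
  method: "POST",
  body: bytes,
});
console.log(await info.json()); // e.g. { format: "image/jpeg", width: 1024, ... }

// Any other path runs the transform and streams 512×512 WebP back.
const out = await fetch("https://image-worker.example.com/", {
  method: "POST",
  body: bytes,
});
console.log(out.headers.get("content-type")); // "image/webp"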

Thanks to Alex (#237) for the contribution; see Images.ts.

Hyperdrive resources are bindable on Workers the same way as R2, KV, and D1. Two flavors to know:

Declare a Hyperdrive over an existing origin string. Useful when the database lives outside of Alchemy (existing RDS, Supabase, etc.):

const Hyperdrive = yield* Cloudflare.Hyperdrive("DbPool", {
  origin: { connectionString: process.env.POSTGRES_URL! },
});

Wire it onto a Neon branch you already deployed. Neon.Branch exposes a pre-parsed origin output that Cloudflare.Hyperdrive takes directly — Alchemy orders the deploy graph correctly:

src/Db.ts
export const NeonDb = Effect.gen(function* () {
  const project = yield* Neon.Project("app-db", { region: "aws-us-east-1" });
  const branch = yield* Neon.Branch("app-branch", { project });
  return { project, branch };
});

export const Hyperdrive = Effect.gen(function* () {
  const { branch } = yield* NeonDb;
  return yield* Cloudflare.Hyperdrive("app-hyperdrive", {
    origin: branch.origin, // direct (non-pooled) Neon endpoint
  });
});

Bind it on the Worker and read the pooled connection string at runtime — Hyperdrive does the connection pooling so the Worker can spin up a fresh pg client per request without exhausting the database:

import { Client } from "pg"; // node-postgres; works under nodejs_compat

export default class Api extends Cloudflare.Worker<Api>()(
  "Api",
  {
    main: import.meta.path,
    compatibility: { flags: ["nodejs_compat"] },
  },
  Effect.gen(function* () {
    const hd = yield* Cloudflare.Hyperdrive.bind(Hyperdrive);
    return {
      fetch: Effect.gen(function* () {
        const connectionString = yield* hd.connectionString;
        const rows = yield* Effect.promise(async () => {
          // Fresh client per request; Hyperdrive pools upstream.
          const client = new Client({ connectionString });
          await client.connect();
          try {
            const r = await client.query("SELECT now() as now");
            return r.rows;
          } finally {
            await client.end().catch(() => {});
          }
        });
        return yield* HttpServerResponse.json({ rows });
      }),
    };
  }).pipe(Effect.provide(Cloudflare.HyperdriveConnectionLive)),
) {}

Thanks to Baptiste Arnaud (#282) for the contribution; see Tutorial › Neon + Hyperdrive.

Object lifecycle policies on R2Bucket (age-based deletion, storage-class transitions, multipart-upload abort) can now be declared inline alongside the rest of the bucket config. R2 caps buckets at 1000 rules; pass an empty array (or omit the field) to clear all rules.

const Logs = yield* Cloudflare.R2Bucket("Logs", {
  lifecycleRules: [
    {
      id: "archive-then-delete",
      prefix: "logs/",
      // After 60 days: move to Infrequent Access.
      storageClassTransitions: [
        {
          condition: { type: "Age", maxAge: 60 * 60 * 24 * 60 },
          storageClass: "InfrequentAccess",
        },
      ],
      // After 365 days: delete entirely.
      deleteObjectsTransition: {
        condition: { type: "Age", maxAge: 60 * 60 * 24 * 365 },
      },
      // Abort stale multipart uploads after 7 days.
      abortMultipartUploadsTransition: {
        condition: { type: "Age", maxAge: 60 * 60 * 24 * 7 },
      },
    },
  ],
});

Each entry is reconciled in place — edit the array and the next deploy applies the delta against R2’s live policy.
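
Clearing the policy later is the same motion. A minimal sketch of the same bucket with every rule dropped:

const Logs = yield* Cloudflare.R2Bucket("Logs", {
  lifecycleRules: [], // an empty array (or omitting the field) clears all live rules
});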

Thanks to Baptiste Arnaud (#284) for the contribution. See Cloudflare R2 docs › Object lifecycles for the underlying behavior.

  • *.localhost resolution in dev mode. bun alchemy dev now routes *.localhost hostnames through a custom undici dispatcher, so cross-Worker fetch calls to e.g. https://backend.localhost resolve to the local sidecar instead of failing DNS (a short sketch follows this list).
  • Smoke tests install canaries at the workspace root rather than per-fixture. Batched smoke runs are noticeably faster on cold caches.
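
A minimal sketch of the dev-mode behavior; "backend.localhost" and the /health path are illustrative stand-ins for whatever Worker you run locally:

// While `bun alchemy dev` is running, this resolves to the local
// sidecar instead of failing DNS lookup.
const res = await fetch("https://backend.localhost/health");
console.log(res.status);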

Big thank-you to everyone who shipped code in this beta:

  • Alex — Cloudflare Images binding (#237)
  • Baptiste Arnaud — Hyperdrive in Worker bindings (#282)
  • Baptiste Arnaud — R2 lifecycle rules (#284)