
Top 10 Java Frameworks for AWS Lambda Cold Starts

Last modified: April 21, 2026 · 16 min read

Java on Lambda has a bad reputation. Deserved? Yes, historically. A plain Spring Boot jar can sit there initializing for 5 to 8 seconds before it answers the first request, which is a non-starter for anything a user is waiting on. But the ecosystem moved on. Between GraalVM native images, SnapStart, compile-time DI, and reflection-free reactive stacks, you've got several paths to sub-second cold starts, and in a few cases you can get under 100 ms without sweating.

What follows is an opinionated ranking of ten frameworks and approaches I'd reach for today. The order weighs cold start first, then memory, then how much pain the framework inflicts on you to reach those numbers. Your mileage will vary. That's fine; the relative positions tend to hold.

What actually slows a Java cold start down

Quick refresher on what Lambda does when it boots a fresh Java container:

  1. Pulls your zip or container image
  2. Starts the JVM and loads classes
  3. Hands control to the framework, which then scans the classpath, reads config, resolves beans, wires up DI
  4. Instantiates your handler and runs the first invocation

Most of the wall-clock damage happens in steps 2 and 3. Frameworks attack it from three angles:

  • Ahead-of-time compilation with GraalVM Native Image, which produces a native binary and skips the JVM completely
  • Compile-time DI, which pushes classpath scanning and bean wiring from runtime to build time
  • Checkpoint and restore (CRaC, SnapStart), which snapshots an already-warmed JVM and just resumes it

Every entry below leans on one or more of those. Worth keeping this in mind as you read the list, because the ranking isn't arbitrary; it follows what each technique can realistically deliver.
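
To make the split concrete, here's a self-contained sketch of the standard Lambda pattern: expensive setup lives in static initializers (run once, during init), and the handler body stays cheap. The RequestHandler interface here is a simplified stand-in for the one in aws-lambda-java-core, just so the snippet compiles on its own:

```java
import java.util.Map;

// Simplified stand-in for com.amazonaws.services.lambda.runtime.RequestHandler,
// so this sketch compiles without the AWS libraries.
interface RequestHandler<I, O> {
    O handleRequest(I input);
}

public class ColdStartSketch {
    // Static initializers run once per container, during the init phase.
    // Frameworks that move scanning and wiring to build time shrink exactly this.
    static final Map<String, String> CONFIG = Map.of("table", "orders");

    // The handler body runs on every invocation; only the first one pays for init.
    static final RequestHandler<String, String> HANDLER =
            input -> CONFIG.get("table") + ":" + input;

    public static void main(String[] args) {
        System.out.println(HANDLER.handleRequest("42")); // prints "orders:42"
    }
}
```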

1. Quarkus

If you're picking something today and you care about latency, Quarkus is where I'd start. It was built for GraalVM from day one and shoves as much work as possible (classpath scanning, annotation processing, bean wiring) into the build. A native Quarkus binary on provided.al2023 typically wakes up in 60 to 120 ms. Memory sits around 25 to 40 MB per handler.

The quarkus-amazon-lambda extension is a thin wrapper over RequestHandler, so your code looks like normal Lambda Java with CDI injection plus the usual application.properties. Native build? One command:

./mvnw package -Pnative -Dquarkus.native.container-build=true
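
Two properties cover most Lambda setups; a sketch of application.properties (the handler name "orders" is illustrative, matching a @Named handler in your code):

```properties
# Selects which named handler Lambda invokes when several are on the classpath
quarkus.lambda.handler=orders
# Run the native build inside a container, so no local GraalVM install is needed
quarkus.native.container-build=true
```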

Downside: build time. GraalVM inside a container chugs for 3 to 6 minutes on a laptop, and it burns memory. You don't want that on every save. In practice you run JVM mode locally and native only in CI.

For the full deployment walkthrough see Creating a Lambda Function with Quarkus and GraalVM. If you haven't met the framework yet, start with Introduction to Quarkus.

2. Micronaut

Micronaut is the other half of the "designed for native" duo. Instead of runtime reflection it leans on annotation processors that generate DI metadata at compile time. Result? JVM-mode cold starts around 1 to 2 seconds, and GraalVM native pushes that comfortably below 150 ms.

Lambda support lands through micronaut-function-aws and micronaut-function-aws-api-proxy. The second one is the interesting one, because it lets you put a full controller stack behind API Gateway. If you've written Spring MVC, you'll feel right at home:

@Controller("/orders")
public class OrderController {
    private final OrderService orderService;

    OrderController(OrderService orderService) {
        this.orderService = orderService; // constructor injection, resolved at compile time
    }

    @Get("/{id}")
    public Order getOrder(Long id) {
        return orderService.find(id);
    }
}

I've found Micronaut less opinionated than Quarkus, and the Gradle story is cleaner. ./gradlew nativeCompile spits out a Lambda-ready binary, no profile gymnastics. Pick this one if the Spring-flavored annotations feel more natural than CDI.

3. Spring Cloud Function with Spring Native

Spring Boot 3 brought AOT processing, and Spring Native builds on that. Pair it with Spring Cloud Function and the spring-cloud-function-adapter-aws module, compile to GraalVM, and cold starts settle into the 200 to 400 ms range. Not as quick as Quarkus. Still massively better than the 5+ seconds a JVM-mode Spring Boot jar would cost you.

A handler really can be this small:

@SpringBootApplication
public class Application {
    @Bean
    public Function<String, String> uppercase() {
        return String::toUpperCase;
    }
}

Spring wires the bean into the Lambda lifecycle, then spring-boot:process-aot plus native-maven-plugin gives you the binary. So what's the catch? Not every starter plays nicely with native yet. Reactive modules and the common auth stack work fine. Some heavier integration libraries still need manual reflection hints, which is a fun afternoon of stack traces.
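
The build side is two plugins; a trimmed pom sketch (versions are managed by the Spring Boot parent and whatever graalvm buildtools release you pin):

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
      <executions>
        <execution>
          <goals>
            <goal>process-aot</goal> <!-- generates AOT metadata at build time -->
          </goals>
        </execution>
      </executions>
    </plugin>
    <plugin>
      <groupId>org.graalvm.buildtools</groupId>
      <artifactId>native-maven-plugin</artifactId> <!-- compiles the native image -->
    </plugin>
  </plugins>
</build>
```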

4. Spring Boot with AWS SnapStart

Don't want to rewrite anything? SnapStart is for you. AWS runs your init once, takes a Firecracker microVM snapshot, then resumes that snapshot on each cold start. For a typical Spring Boot 3 app, you're going from 5 to 8 seconds down to 200 to 600 ms, and the only change is setting SnapStart's ApplyOn: PublishedVersions on the function and invoking through a published version or alias.
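
In SAM terms, the whole migration is a couple of lines; a template sketch (the function name, alias, and handler are illustrative):

```yaml
Resources:
  OrderFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: java21
      Handler: com.example.OrderHandler::handleRequest
      AutoPublishAlias: live          # SnapStart only applies to published versions
      SnapStart:
        ApplyOn: PublishedVersions
```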

For the Java runtimes there's no extra charge, though it's limited to the java17, java21, and later managed runtimes. Here's the gotcha that bites everyone: anything you initialize once (database connections, RNG seeds, cached tokens) gets restored identically on every snapshot resume. If that state needs to be unique per container, you have to refresh it through Lambda's runtime hooks, which reuse the CRaC Resource API:

import org.crac.Context;
import org.crac.Core;
import org.crac.Resource;

public class SnapStartHooks implements Resource {
    public SnapStartHooks() {
        Core.getGlobalContext().register(this); // hooks only fire for registered resources
    }

    @Override
    public void beforeCheckpoint(Context<? extends Resource> context) {
        // close connections before the snapshot is taken
    }

    @Override
    public void afterRestore(Context<? extends Resource> context) {
        // re-seed RNG, reconnect, re-fetch secrets
    }
}

SnapStart is the cheapest migration you'll find from "classic Spring" to "actually usable on Lambda". You won't match native, but a config flag versus a rewrite isn't a close call for most teams.

5. Helidon SE

Helidon ships in two flavors. MP is the MicroProfile implementation. SE is the lightweight functional API, and SE is the one that matters on Lambda. No DI container, no annotation scanning, a small runtime. Pair it with GraalVM native and you're looking at 100 to 200 ms cold starts.

The programming model is explicit, which some people love and others find verbose:

public class LambdaHandler implements RequestHandler<Request, Response> {
    // Built once per container, reused across invocations
    private static final WebClient CLIENT = WebClient.builder()
            .baseUri("https://api.example.com")
            .build();

    @Override
    public Response handleRequest(Request req, Context ctx) {
        // request(Class) deserializes the body; await() blocks until it resolves
        return CLIENT.get().request(Response.class).await();
    }
}

If you want the predictability of plain Java plus a reactive HTTP client and not much else, Helidon SE hits a nice spot. MP exists too, but on Lambda its startup overhead pushes it closer to Spring than to Quarkus, so it rarely wins.

6. Dagger 2

Dagger isn't a framework in the usual sense. It's a compile-time DI library, originally built for Android, and it happens to be almost perfect for Lambda. Because the DI graph is generated as plain Java at compile time, there's zero runtime reflection. No classpath scan. JVM-mode cold starts usually land in the 300 to 500 ms range, and GraalVM native drops them under 100 ms.

You declare the component interface, the annotation processor writes the wiring:

@Component(modules = {ServicesModule.class})
public interface AppComponent {
    OrderHandler orderHandler();
}

public class Lambda implements RequestHandler<Request, Response> {
    private static final AppComponent APP = DaggerAppComponent.create();

    @Override
    public Response handleRequest(Request req, Context ctx) {
        return APP.orderHandler().handle(req);
    }
}
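
It helps to see that there's no magic on the other side of that annotation. What the processor emits is ordinary Java you could write by hand; a self-contained approximation (OrderService and OrderHandler are hypothetical stand-ins, and HandWiredComponent is my sketch of the generated component, not Dagger's actual output):

```java
// Plain-Java equivalent of what Dagger generates for a component like the one
// above: direct constructor calls, no reflection, nothing to scan at startup.
final class OrderService {
    String find(long id) {
        return "order-" + id;
    }
}

final class OrderHandler {
    private final OrderService service;

    OrderHandler(OrderService service) {
        this.service = service;
    }

    String handle(long id) {
        return service.find(id);
    }
}

final class HandWiredComponent {
    // Roughly what DaggerAppComponent.create().orderHandler() boils down to
    static OrderHandler orderHandler() {
        return new OrderHandler(new OrderService());
    }
}
```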

What you don't get: routing, config binding, serialization, the rest of the usual stack. You build that yourself, or you pull in small libraries piecewise. Is that extra work? Sure. For a handler with three dependencies doing one thing, it's usually worth it.

7. Vert.x

Vert.x is polyglot, reactive, and closer in spirit to Node.js than to your typical Java server. No DI container means not much to initialize. JVM-mode Lambda cold starts hover around 400 to 800 ms. GraalVM native gets you to roughly 150 to 250 ms.

On Lambda you skip the HTTP server pieces and keep just the parts you need: WebClient, PgPool, EventBus. Your handler still reads like reactive code:

public class Handler implements RequestHandler<Request, Response> {
    private static final Vertx VERTX = Vertx.vertx();
    private static final WebClient WC = WebClient.create(VERTX);

    @Override
    public Response handleRequest(Request req, Context ctx) {
        return WC.getAbs("https://api.example.com/ping")
                .send().toCompletionStage().toCompletableFuture().join().bodyAsJson(Response.class);
    }
}

Where Vert.x really earns its place is I/O-bound fan-out. If your Lambda hits four downstream services in one request, the event loop pays for itself. Native image support exists but isn't as polished as Quarkus or Micronaut, so expect a few rough edges.
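
The fan-out shape is easy to show even without Vert.x on the classpath. This self-contained sketch uses plain CompletableFuture so it runs standalone; in Vert.x you'd compose Future.all over WebClient calls instead, with the event loop multiplexing the I/O on one thread:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class FanOutSketch {
    // Stand-in for a downstream HTTP call; in Vert.x this would be
    // something like WC.getAbs(url).send() returning a Future
    static CompletableFuture<String> call(String service) {
        return CompletableFuture.supplyAsync(() -> service + ":ok");
    }

    public static List<String> fanOut(List<String> services) {
        List<CompletableFuture<String>> calls = services.stream()
                .map(FanOutSketch::call)   // fire all requests up front
                .toList();
        // join only after everything is in flight: total latency is the
        // slowest call, not the sum of all four
        return calls.stream().map(CompletableFuture::join).toList();
    }

    public static void main(String[] args) {
        System.out.println(fanOut(List.of("users", "orders", "billing", "audit")));
    }
}
```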

8. Apache Camel Quarkus

Some Lambdas aren't really "applications". They're glue. SQS in, transform a payload, drop it in DynamoDB, publish a notification on SNS. If that's your world, Camel Quarkus is the cleanest option I've used. You get the full Apache Camel DSL on top of Quarkus, which means Quarkus-level cold starts plus decades of battle-tested connectors.

A route reads like a pipeline diagram, because that's what it is:

from("direct:order")
  .unmarshal().json(JsonLibrary.Jackson, Order.class)
  .to("aws2-ddb://orders?operation=PutItem")
  .to("aws2-sns://order-events");

It's heavier than vanilla Quarkus (every connector drags in its own deps), but the native build is aggressive about stripping unused code. Load two or three connectors and you'll see cold starts in the 150 to 250 ms range. Good enough.
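
Each connector in that route is its own extension; the pom entries would look roughly like this (artifact IDs from the Camel Quarkus aws2 family; versions come from the Camel Quarkus BOM):

```xml
<dependency>
  <groupId>org.apache.camel.quarkus</groupId>
  <artifactId>camel-quarkus-jackson</artifactId>  <!-- unmarshal().json(...) -->
</dependency>
<dependency>
  <groupId>org.apache.camel.quarkus</groupId>
  <artifactId>camel-quarkus-aws2-ddb</artifactId> <!-- DynamoDB endpoint -->
</dependency>
<dependency>
  <groupId>org.apache.camel.quarkus</groupId>
  <artifactId>camel-quarkus-aws2-sns</artifactId> <!-- SNS endpoint -->
</dependency>
```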

9. Open Liberty InstantOn

Here's one I didn't expect to recommend a year ago. Open Liberty is IBM's Jakarta EE and MicroProfile server, and for a long time nobody in their right mind would put it on Lambda because of startup cost. Then InstantOn showed up. It uses CRIU checkpoint and restore to snapshot an already-initialized server, and restore times land in the 150 to 300 ms range, which is SnapStart territory.

Who's this for? Teams with real Jakarta EE code, usually years old, that would cost a fortune to rewrite and have no appetite for a rewrite. Deployment uses container images with the provided Lambda runtime:

FROM icr.io/appcafe/open-liberty:full-java17-openj9
COPY --chown=1001:0 server.xml /config/
COPY --chown=1001:0 target/app.war /config/apps/
RUN configure.sh
RUN checkpoint.sh afterAppStart

Build creates the checkpoint. Lambda resumes from it. The one pain point is image size, since a checkpointed image is noticeably bigger than the usual zip, and that hurts deployment time and initial container pulls. Worth knowing before you commit.

10. Plain Java with aws-lambda-java-core

Sometimes the best framework is no framework. aws-lambda-java-core plus aws-lambda-java-events gives you RequestHandler and a pile of typed event classes, and that's the whole story. No DI, no config binding, no controllers, no magic. Slim the jar aggressively (Maven Shade or ProGuard, plus excluding AWS SDK pieces you don't touch) and JVM-mode cold starts land between 600 ms and 1 second.
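
Most of that slimming is one Shade flag; a pom sketch (minimizeJar drops classes nothing in your code reaches):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <minimizeJar>true</minimizeJar> <!-- strip classes with no inbound references -->
  </configuration>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```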

Push it further with GraalVM native and you can match Quarkus numbers with zero framework dependencies:

public class Handler implements RequestHandler<SQSEvent, Void> {
    private static final DynamoDbClient DDB = DynamoDbClient.create();

    @Override
    public Void handleRequest(SQSEvent event, Context ctx) {
        event.getRecords().forEach(this::process);
        return null;
    }

    private void process(SQSEvent.SQSMessage msg) {
        // write the record to DynamoDB via DDB (body omitted)
    }
}

This is the right answer more often than people admit. Narrow handler, two or three dependencies, nothing exotic? A framework is mostly paying yourself in config files and build complexity. Plain Java stays boring, easy to reason about, and easy to debug when something goes sideways at 3 a.m.

Picking the right one

There's no single winner. What works depends on what you've already got and what you're actually optimizing for:

  • Greenfield, lowest cold start: Quarkus or Micronaut with GraalVM native
  • Existing Spring Boot on Lambda: Spring Boot + SnapStart (or Spring Native for sub-200 ms)
  • Jakarta EE legacy code: Open Liberty InstantOn
  • Integration glue with many connectors: Apache Camel Quarkus
  • Narrow handler, minimal deps: Dagger 2 or plain Java + Lambda Core
  • I/O-bound fan-out workloads: Vert.x
  • Reactive and lightweight: Helidon SE

Starting fresh and latency is the target? Quarkus and Micronaut, full stop. Migrating an existing codebase? SnapStart before a rewrite, always. Already on an enterprise Java server? InstantOn keeps the programming model intact while getting you onto Lambda. And for the one-off handler that does exactly one thing, pulling in a framework is probably the wrong move.

A quick word on benchmarks

The numbers above are directional, not exact. Your cold start depends on a lot: how much memory you allocate (Lambda scales CPU with memory, which matters a lot for JVM warmup), the region, the artifact size, and how much work your init code actually does. The gaps between frameworks hold steady across setups. The absolute numbers will shift.

Before you commit to a stack, measure with your real dependencies and a real handler. JMH is the wrong tool for cold starts (it's for microbenchmarks inside the JVM) but it's great for finding hot paths once your handler is running. If you want a primer, see Improve Java performance: Microbenchmarking with JMH.
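
For the cold-start side, the REPORT lines Lambda writes to CloudWatch already carry the number you want; a Logs Insights query sketch (@initDuration only appears on cold starts):

```
filter @type = "REPORT" and ispresent(@initDuration)
| stats count() as coldStarts,
        avg(@initDuration) as avgInitMs,
        pct(@initDuration, 95) as p95InitMs
```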

Wrap-up

Java on Lambda is no longer a bad bet. Pick the right framework, walk the right optimization path, and you'll routinely land under 300 ms. Sometimes under 100 ms. Quarkus and Micronaut lead the greenfield pack. SnapStart rescues existing Spring. Lighter options like Dagger 2 or plain Java stay surprisingly competitive when you don't need the full stack.

The real decision isn't "which framework". It's "how much am I willing to change". Rewrite for the lowest possible latency, or keep what you have and apply a lighter optimization. Both are fine. The ranking above is there to help you pick the level of investment that fits your workload, not to tell you there's only one right answer.