# Connect SlateDB to S3
This tutorial shows you how to connect SlateDB to S3. We’ll use LocalStack to simulate S3.
## Create a project

Let's start by creating a new Rust project:

```shell
cargo init slatedb-playground
cd slatedb-playground
```
## Add dependencies

Now add SlateDB, the `object_store` crate, Tokio, and `anyhow` to your `Cargo.toml`. The code below uses `#[tokio::main]` and `anyhow::Result`, so Tokio's macro and runtime features need to be enabled:

```shell
cargo add slatedb object-store tokio anyhow --features object-store/aws --features tokio/full
```
You will need to have LocalStack running. You can install it using Homebrew:

```shell
brew install localstack/tap/localstack-cli
localstack start -d
```

For a more detailed setup, see the LocalStack documentation.

You'll also need the AWS CLI:

```shell
brew install awscli
```
## Initialize AWS

SlateDB requires a bucket to work with S3. Create your S3 bucket:

```shell
# Create S3 bucket
aws --endpoint-url=http://localhost:4566 s3api create-bucket --bucket slatedb --region us-east-1
```
## Write some code

Stick this into your `src/main.rs` file:

```rust
use object_store::aws::S3ConditionalPut;
use slatedb::Db;
use std::sync::Arc;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let object_store = Arc::new(
        object_store::aws::AmazonS3Builder::new()
            // These will be different if you are using real AWS
            .with_allow_http(true)
            .with_endpoint("http://localhost:4566")
            .with_access_key_id("test")
            .with_secret_access_key("test")
            .with_bucket_name("slatedb")
            .with_region("us-east-1")
            .with_conditional_put(S3ConditionalPut::ETagMatch)
            .build()?,
    );

    let db = Db::open("test", object_store.clone()).await?;

    // Call db.put with a key and a 64 MiB value to trigger an L0 SST flush
    let value: Vec<u8> = vec![0; 64 * 1024 * 1024];
    db.put(b"k1", value.as_slice()).await?;
    db.close().await?;

    Ok(())
}
```
## Run the code

Now you can run the code:

```shell
cargo run
```
This will write a 64 MiB value to SlateDB.
## Check the results

Now let's check the root of the bucket:

```shell
% aws --endpoint-url=http://localhost:4566 s3 ls s3://slatedb/test/
                           PRE compacted/
                           PRE manifest/
                           PRE wal/
```
There are three folders:

- `compacted`: Contains the compacted SST files.
- `manifest`: Contains the manifest files.
- `wal`: Contains the write-ahead log files.
Let's check the `wal` folder:

```shell
% aws --endpoint-url=http://localhost:4566 s3 ls s3://slatedb/test/wal/
2024-09-04 18:05:57         64 00000000000000000001.sst
2024-09-04 18:05:58   67108996 00000000000000000002.sst
```
Each of these SST files is a write-ahead log entry. They get flushed based on the `flush_interval` config. The last entry is 64 MiB, which is the value we wrote.
Finally, let's check the `compacted` folder:

```shell
% aws --endpoint-url=http://localhost:4566 s3 ls s3://slatedb/test/compacted/
2024-09-04 18:05:59   67108996 01J6ZVEZ394GCJT1PHZYY1NZGP.sst
```
Again, we see the 64 MiB SST file. This is the L0 SST file that was flushed with our value. Over time, the WAL entries will be removed, and the L0 SSTs will be compacted into higher levels.