
Implementation Plan

This plan describes a phased rollout of the AnyFS ecosystem:

  • anyfs-backend: Layered traits (Fs, FsFull, FsFuse, FsPosix) + Layer + types
  • anyfs: Built-in backends + middleware (feature-gated) + FileStorage<B> ergonomic wrapper

Implementation Guidelines

These guidelines apply to ALL implementation work. Derived from analysis of issues in similar projects (vfs, agentfs).

1. No Panic Policy

NEVER panic in library code. Always return Result<T, FsError>.

  • Audit all .unwrap() and .expect() calls - replace with ? or proper error handling
  • Use ok_or_else(|| FsError::...) instead of .unwrap()
  • Edge cases must return errors, not panic
  • Test in constrained environments (WASM) to catch hidden panics
// BAD: panics if the path is missing
let entry = self.entries.get(&path).unwrap();

// GOOD: returns a typed error the caller can match on
let entry = self.entries.get(&path)
    .ok_or_else(|| FsError::NotFound { path: path.to_path_buf() })?;

2. Thread Safety Requirements

All backends must be safe for concurrent access:

  • MemoryBackend: Use Arc<RwLock<...>> for internal state
  • SqliteBackend: Use WAL mode, handle SQLITE_BUSY
  • VRootFsBackend: File operations are inherently concurrent-safe

Required: Concurrent stress tests in conformance suite.
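As a hedged sketch of the MemoryBackend guideline above (names and error types are illustrative, not the actual anyfs API): all internal state sits behind a single `Arc<RwLock<...>>`, so the backend is cheaply cloneable and safe to share across threads, and a poisoned lock becomes an error rather than a panic.

```rust
use std::collections::HashMap;
use std::path::PathBuf;
use std::sync::{Arc, RwLock};

/// Illustrative MemoryBackend state: one shared, lock-protected map.
#[derive(Clone, Default)]
struct MemoryBackend {
    entries: Arc<RwLock<HashMap<PathBuf, Vec<u8>>>>,
}

impl MemoryBackend {
    fn write(&self, path: &str, data: &[u8]) -> Result<(), String> {
        // A poisoned lock surfaces as Err, never a panic (no-panic policy).
        let mut map = self.entries.write().map_err(|e| e.to_string())?;
        map.insert(PathBuf::from(path), data.to_vec());
        Ok(())
    }

    fn read(&self, path: &str) -> Result<Vec<u8>, String> {
        let map = self.entries.read().map_err(|e| e.to_string())?;
        map.get(&PathBuf::from(path))
            .cloned()
            .ok_or_else(|| format!("not found: {path}"))
    }
}
```

Cloning the backend clones only the `Arc`, which is what makes concurrent stress tests (many threads, one backend) straightforward to write.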

3. Consistent Path Handling

FileStorage handles path resolution via pluggable PathResolver trait (see ADR-033):

  • Always absolute paths internally
  • Always / separator (even on Windows)
  • Default IterativeResolver: symlink-aware canonicalization (not lexical)
  • Handle edge cases: //, trailing /, empty string, circular symlinks
  • Optional resolver: CachingResolver (for read-heavy workloads)

Public canonicalization API on FileStorage:

  • canonicalize(path) - strict, all components must exist
  • soft_canonicalize(path) - resolves existing, appends non-existent lexically
  • anchored_canonicalize(path, anchor) - sandboxed resolution

Standalone utility:

  • normalize(path) - lexical cleanup only (collapses //, removes trailing /). Does NOT resolve `.` or `..` components.
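A minimal sketch of what a lexical `normalize` could look like (illustrative, not the actual anyfs implementation): it collapses duplicate separators and strips a trailing slash, but deliberately leaves `.` and `..` alone, since those require filesystem-aware resolution.

```rust
/// Lexical cleanup only: no filesystem access, no `.`/`..` handling.
fn normalize(path: &str) -> String {
    let mut out = String::new();
    let mut prev_slash = false;
    for c in path.chars() {
        if c == '/' {
            if !prev_slash {
                out.push('/'); // keep the first slash, drop repeats
            }
            prev_slash = true;
        } else {
            out.push(c);
            prev_slash = false;
        }
    }
    // Strip a trailing slash, but keep the bare root "/" intact.
    if out.len() > 1 && out.ends_with('/') {
        out.pop();
    }
    out
}
```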

4. Error Type Design

FsError must be:

  • Easy to pattern match
  • Include context (path, operation)
  • Derive thiserror for good messages
  • Use #[non_exhaustive] for forward compatibility
use std::path::PathBuf;

#[non_exhaustive]
#[derive(Debug, thiserror::Error)]
pub enum FsError {
    // Path/File Errors
    #[error("not found: {}", .path.display())]
    NotFound { path: PathBuf },

    #[error("{}: already exists: {}", .operation, .path.display())]
    AlreadyExists { path: PathBuf, operation: &'static str },

    #[error("not a file: {}", .path.display())]
    NotAFile { path: PathBuf },

    #[error("not a directory: {}", .path.display())]
    NotADirectory { path: PathBuf },

    #[error("directory not empty: {}", .path.display())]
    DirectoryNotEmpty { path: PathBuf },

    // Permission/Access Errors
    #[error("{}: permission denied: {}", .operation, .path.display())]
    PermissionDenied { path: PathBuf, operation: &'static str },

    #[error("access denied: {} ({})", .path.display(), .reason)]
    AccessDenied { path: PathBuf, reason: String },

    #[error("read-only filesystem: {operation}")]
    ReadOnly { operation: &'static str },

    #[error("{operation}: feature not enabled: {feature}")]
    FeatureNotEnabled { feature: &'static str, operation: &'static str },

    // Resource Limit Errors (from Quota middleware)
    #[error("quota exceeded: limit {limit}, requested {requested}, usage {usage}")]
    QuotaExceeded { limit: u64, requested: u64, usage: u64 },

    #[error("file size exceeded: {} ({} > {})", .path.display(), .size, .limit)]
    FileSizeExceeded { path: PathBuf, size: u64, limit: u64 },

    #[error("rate limit exceeded: {limit}/s (window: {window_secs}s)")]
    RateLimitExceeded { limit: u32, window_secs: u64 },

    // ... see design-overview.md for complete list
}

See design-overview.md for the complete FsError definition.

5. Documentation Requirements

Every backend and middleware must document:

  • Thread safety guarantees
  • Performance characteristics
  • Which operations are O(1) vs O(n)
  • Any platform-specific behavior

Phase 1: anyfs-backend (core contract)

Goal: Define the stable backend interface using layered traits.

Layered Trait Architecture

                    FsPosix
                       │
        ┌──────────────┼──────────────┐
        │              │              │
   FsHandles        FsLock        FsXattr
        │              │              │
        └──────────────┴──────────────┘
                       │
                    FsFuse  ← FsFull + FsInode
                       │
        ┌──────────────┴──────────────┐
        │                             │
     FsFull                       FsInode
        │
        ├──────┬───────┬───────┐
        │      │       │       │
    FsLink  FsPerm  FsSync  FsStats
        │      │       │       │
        └──────┴───────┴───────┘
                       │
                       Fs  ← Most users only need this
                       │
           ┌───────────┼───────────┐
           │           │           │
        FsRead     FsWrite      FsDir

Core Traits (Layer 1 - Required)

  • FsRead: read, read_to_string, read_range, exists, metadata, open_read
  • FsWrite: write, append, remove_file, rename, copy, truncate, open_write
  • FsDir: read_dir, create_dir, create_dir_all, remove_dir, remove_dir_all
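To make the shape of Layer 1 concrete, here is a hedged sketch of how `FsRead` might be declared (the real anyfs signatures, error variants, and default methods may differ — the simplified `FsError` here is illustrative only). Convenience methods like `read_to_string` can be default methods built on `read`, mirroring `std::fs` ergonomics:

```rust
use std::path::{Path, PathBuf};

// Simplified stand-in for the real FsError (illustrative only).
#[derive(Debug)]
pub enum FsError {
    NotFound { path: PathBuf },
    InvalidData { path: PathBuf },
}

pub trait FsRead {
    // Required: fetch raw bytes for a (pre-resolved) path.
    fn read(&self, path: &Path) -> Result<Vec<u8>, FsError>;

    // Default methods layered on top of `read`.
    fn exists(&self, path: &Path) -> Result<bool, FsError> {
        Ok(self.read(path).is_ok())
    }
    fn read_to_string(&self, path: &Path) -> Result<String, FsError> {
        let bytes = self.read(path)?;
        String::from_utf8(bytes)
            .map_err(|_| FsError::InvalidData { path: path.to_path_buf() })
    }
}

// Trivial single-file backend, just to show a conforming implementation.
struct OneFile;
impl FsRead for OneFile {
    fn read(&self, path: &Path) -> Result<Vec<u8>, FsError> {
        if path == Path::new("/hello.txt") {
            Ok(b"hello".to_vec())
        } else {
            Err(FsError::NotFound { path: path.to_path_buf() })
        }
    }
}
```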

Extended Traits (Layer 2 - Optional)

  • FsLink: symlink, hard_link, read_link, symlink_metadata
  • FsPermissions: set_permissions
  • FsSync: sync, fsync
  • FsStats: statfs

Inode Trait (Layer 3 - For FUSE)

  • FsInode: path_to_inode, inode_to_path, lookup, metadata_by_inode
    • No blanket/default implementation - must be explicitly implemented
    • Required for FUSE mounting (FUSE operates on inodes, not paths)
    • Enables correct hardlink reporting (same inode = same file, nlink count)
    • Note: FsLink defines hardlink creation; FsInode enables FUSE to track them
    • inode_to_path requires backend to maintain path mappings

POSIX Traits (Layer 4 - Full POSIX)

  • FsHandles: open, read_at, write_at, close
  • FsLock: lock, try_lock, unlock
  • FsXattr: get_xattr, set_xattr, remove_xattr, list_xattr

Convenience Supertraits

/// Basic filesystem - covers 90% of use cases
pub trait Fs: FsRead + FsWrite + FsDir {}
impl<T: FsRead + FsWrite + FsDir> Fs for T {}

/// Full filesystem with all std::fs features
pub trait FsFull: Fs + FsLink + FsPermissions + FsSync + FsStats {}

/// FUSE-mountable filesystem
pub trait FsFuse: FsFull + FsInode {}

/// Full POSIX filesystem
pub trait FsPosix: FsFuse + FsHandles + FsLock + FsXattr {}

Other Definitions

  • Define Layer trait (Tower-style middleware composition)
  • Define FsExt trait (extension methods for JSON, type checks)
  • Define FsPath trait (path canonicalization with default impl, requires FsRead + FsLink)
  • Define core types (Metadata, Permissions, FileType, DirEntry, StatFs)
  • Define FsError with contextual variants (see guidelines above)
  • Define ROOT_INODE = 1 constant
  • Define SelfResolving marker trait (opt-in for backends that handle their own path resolution, e.g., VRootFsBackend)
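The Tower-style `Layer` idea from the list above can be sketched as follows (an assumed shape — the real anyfs `Layer` definition may differ). A layer is a factory that wraps an inner backend and returns a new backend type, which is what makes stacks like `Quota<Tracing<B>>` composable:

```rust
/// Hypothetical Tower-style middleware factory.
pub trait Layer<B> {
    type Backend;
    fn layer(&self, inner: B) -> Self::Backend;
}

// Minimal backend and a ReadOnly middleware, to show composition.
pub struct Mem;
impl Mem {
    pub fn write(&self) -> Result<(), &'static str> { Ok(()) }
}

pub struct ReadOnly<B> { inner: B }
impl<B> ReadOnly<B> {
    pub fn write(&self) -> Result<(), &'static str> {
        // Policy lives in the middleware: block writes, pass reads through.
        Err("read-only filesystem")
    }
}

pub struct ReadOnlyLayer;
impl<B> Layer<B> for ReadOnlyLayer {
    type Backend = ReadOnly<B>;
    fn layer(&self, inner: B) -> ReadOnly<B> { ReadOnly { inner } }
}
```

Because `layer` returns a concrete generic type rather than a boxed object, composition stays statically dispatched, matching the "zero-cost by default" goal.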

Exit criteria: anyfs-backend stands alone with minimal dependencies (thiserror required; serde optional for JSON in FsExt).


Phase 2: anyfs (backends + middleware)

Goal: Provide reference backends and core middleware.

Path Resolution (FileStorage’s Responsibility)

FileStorage handles path resolution using its configured PathResolver:

  • Walks path component by component using metadata() and read_link()
  • Handles .. correctly after symlink resolution (symlink-aware, not lexical)
  • Default IterativeResolver follows symlinks for backends that implement FsLink
  • Custom resolvers can implement different behaviors (e.g., no symlink following)
  • Detects circular symlinks (max depth or visited set)
  • Returns canonical resolved path to the backend

SelfResolving backends (StdFsBackend, VRootFsBackend) handle their own resolution. Use FileStorage::with_resolver(backend, NoOpResolver) explicitly.

Backends receive already-resolved paths - they just store/retrieve bytes.
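The walk described above can be sketched in miniature (a simplified model, not the real `IterativeResolver`: symlinks are modeled as a map from absolute path to absolute target, everything else is treated as a plain directory, and intermediate components of link targets are not themselves re-resolved). The key property it demonstrates is that `..` is applied after symlink expansion, and cycles hit a depth cap instead of looping forever:

```rust
use std::collections::HashMap;

/// Simplified symlink-aware resolution with a cycle-breaking depth cap.
fn resolve(path: &str, links: &HashMap<String, String>) -> Result<String, String> {
    const MAX_LINKS: u32 = 40;
    let mut followed = 0u32;
    let mut resolved: Vec<String> = Vec::new();
    let mut pending: Vec<String> = path
        .split('/')
        .filter(|c| !c.is_empty())
        .rev()
        .map(String::from)
        .collect();

    while let Some(comp) = pending.pop() {
        match comp.as_str() {
            "." => {}
            ".." => { resolved.pop(); }
            _ => {
                resolved.push(comp);
                let mut cur = format!("/{}", resolved.join("/"));
                // Expand symlinks where they are encountered, so a later
                // `..` applies to the link *target*, not the link itself.
                while let Some(target) = links.get(&cur) {
                    followed += 1;
                    if followed > MAX_LINKS {
                        return Err("too many levels of symbolic links".to_string());
                    }
                    resolved = target.split('/').filter(|c| !c.is_empty())
                        .map(String::from).collect();
                    cur = format!("/{}", resolved.join("/"));
                }
            }
        }
    }
    Ok(format!("/{}", resolved.join("/")))
}
```

With `/foo` a symlink to `/x/y`, `/foo/../bar` resolves to `/x/bar` — the symlink-aware answer — whereas lexical normalization would wrongly give `/bar`.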

Backends (feature-gated)

Each backend implements the traits it supports:

  • memory (default): MemoryBackend
    • Implements: Fs + FsLink + FsPermissions + FsSync + FsStats + FsInode = FsFuse
    • FileStorage handles path resolution (symlink-aware)
    • Inode source: internal node IDs (incrementing counter)
  • stdfs (optional): StdFsBackend - direct std::fs delegation
    • Implements: FsPosix (all traits, including Layer 4)
    • SelfResolving: the OS handles path resolution
    • Inode source: OS inode numbers (std::os::unix::fs::MetadataExt::ino())
    • No path containment - full filesystem access
    • Use when you only need middleware layers without sandboxing
  • vrootfs (optional): VRootFsBackend using strict-path for containment
    • Implements: FsPosix (all traits, including Layer 4)
    • SelfResolving: the OS handles path resolution; strict-path prevents escapes
    • Inode source: OS inode numbers (std::os::unix::fs::MetadataExt::ino())

Middleware

  • Quota<B> + QuotaLayer - Resource limits
  • Restrictions<B> + RestrictionsLayer - Runtime policy (.deny_permissions())
  • PathFilter<B> + PathFilterLayer - Path-based access control
  • ReadOnly<B> + ReadOnlyLayer - Block writes
  • RateLimit<B> + RateLimitLayer - Operation throttling
  • Tracing<B> + TracingLayer - Instrumentation
  • DryRun<B> + DryRunLayer - Log without executing
  • Cache<B> + CacheLayer - LRU read cache
  • Overlay<B1,B2> + OverlayLayer - Union filesystem

FileStorage (Ergonomic Wrapper)

  • FileStorage<B> - Thin wrapper with std::fs-aligned API
    • Generic backend B (no boxing, static dispatch)
    • Boxed PathResolver internally (cold path, boxing OK per ADR-025)
    • .boxed() method for opt-in type erasure when needed
    • Users who need type-safe domains create wrapper types: struct SandboxFs(FileStorage<B>)
  • BackendStack builder for fluent middleware composition
  • Accepts impl AsRef<Path> in FileStorage/FsExt (core traits use &Path)
  • Delegates all operations to wrapped backend

Axum-style design: Zero-cost by default, type erasure opt-in.

Note: FileStorage contains NO policy logic. Policy is handled by middleware.
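The "zero-cost by default, erasure opt-in" design can be sketched with a toy `Fs` trait (all names here are simplified stand-ins for the anyfs types): the generic wrapper dispatches statically, and `.boxed()` trades that for one concrete type when you need to store heterogeneous stacks together.

```rust
// Toy trait standing in for anyfs's Fs.
trait Fs {
    fn read(&self, path: &str) -> Result<Vec<u8>, String>;
}

struct Mem;
impl Fs for Mem {
    fn read(&self, _path: &str) -> Result<Vec<u8>, String> { Ok(b"data".to_vec()) }
}

/// Generic wrapper: static dispatch, no allocation per call.
struct FileStorage<B: Fs> { backend: B }

/// Erased wrapper: one concrete type regardless of the middleware stack.
struct BoxedFileStorage { backend: Box<dyn Fs> }

impl<B: Fs + 'static> FileStorage<B> {
    fn new(backend: B) -> Self { FileStorage { backend } }
    fn read(&self, path: &str) -> Result<Vec<u8>, String> { self.backend.read(path) }
    /// Opt-in type erasure, axum-style.
    fn boxed(self) -> BoxedFileStorage {
        BoxedFileStorage { backend: Box::new(self.backend) }
    }
}

impl BoxedFileStorage {
    fn read(&self, path: &str) -> Result<Vec<u8>, String> { self.backend.read(path) }
}
```

Deeply nested middleware types like `Quota<Tracing<Mem>>` stay invisible to callers either way; `.boxed()` just collapses the type parameter when naming it becomes awkward.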

Exit criteria: Each backend implements the appropriate trait level (Fs, FsFull, FsFuse) and passes conformance suite. Each middleware wraps backends implementing the same traits. Applications can use FileStorage as drop-in for std::fs patterns.


Phase 3: Conformance test suite

Goal: Prevent backend divergence and validate middleware behavior.

Backend conformance tests

Conformance tests are organized by trait layer:

Layer 1: Fs (Core) - All backends MUST pass

  • FsRead: read/read_to_string/read_range/exists/metadata/open_read
  • FsWrite: write/append/remove_file/rename/copy/truncate/open_write
  • FsDir: read_dir/create_dir*/remove_dir*

Layer 2: FsFull (Extended) - Backends that support these features

  • FsLink: symlink/hard_link/read_link/symlink_metadata
  • FsPermissions: set_permissions
  • FsSync: sync/fsync
  • FsStats: statfs

Layer 3: FsFuse (Inode) - Backends that support FUSE mounting

  • FsInode: path_to_inode/inode_to_path/lookup/metadata_by_inode

Layer 4: FsPosix (Full POSIX) - Backends that support full POSIX

  • FsHandles: open/read_at/write_at/close
  • FsLock: lock/try_lock/unlock
  • FsXattr: get_xattr/set_xattr/remove_xattr/list_xattr

Path Resolution Tests (virtual backends only)

  • /foo/../bar resolves correctly when foo is a regular directory
  • /foo/../bar resolves correctly when foo is a symlink (follows symlink, then ..)
  • Symlink chains resolve correctly (A → B → C → target)
  • Circular symlink detection (A → B → A returns error, not infinite loop)
  • Max symlink depth enforced (prevent deep chains)
  • Reading a symlink follows the target (virtual backends)

Path Edge Cases (learned from vfs issues)

  • //double//slashes// normalizes correctly
  • Note: /foo/../bar requires resolution (see above), not simple normalization
  • Trailing slashes handled consistently
  • Empty path returns error (not panic)
  • Root path / works correctly
  • Very long paths (near OS limits)
  • Unicode paths
  • Paths with spaces and special characters

Thread Safety Tests (learned from vfs #72, #47)

  • Concurrent read from multiple threads
  • Concurrent write to different files
  • Concurrent create_dir_all to same path (must not race)
  • Concurrent read_dir while modifying directory
  • Stress test: 100 threads, 1000 operations each
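The `create_dir_all` race test (the vfs #47 lesson) can be sketched like this, with a toy `DirSet` standing in for a real backend — conformance tests would run the same shape against each backend. The invariant: many threads racing to create the same path must all succeed, because creating an existing directory is idempotent.

```rust
use std::collections::BTreeSet;
use std::sync::{Arc, Mutex};
use std::thread;

/// Toy "backend": just a shared set of existing directories.
#[derive(Clone, Default)]
struct DirSet(Arc<Mutex<BTreeSet<String>>>);

impl DirSet {
    fn create_dir_all(&self, path: &str) -> Result<(), String> {
        let mut dirs = self.0.lock().map_err(|e| e.to_string())?;
        let mut cur = String::new();
        for comp in path.split('/').filter(|c| !c.is_empty()) {
            cur.push('/');
            cur.push_str(comp);
            dirs.insert(cur.clone()); // idempotent: an existing dir is not an error
        }
        Ok(())
    }

    fn len(&self) -> usize {
        self.0.lock().expect("lock poisoned").len()
    }
}
```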

Error Handling Tests (learned from vfs #8, #23)

  • Missing file returns NotFound, not panic
  • Missing parent directory returns error, not panic
  • Invalid UTF-8 in path returns error, not panic
  • All error variants are matchable

Platform Tests

  • Windows path separators (\ vs /)
  • Case sensitivity differences
  • Symlink behavior differences

Middleware tests

  • Quota: Limit enforcement, usage tracking, streaming writes
  • Restrictions: Permission blocking via .deny_permissions(), error messages
  • PathFilter: Glob pattern matching, deny-by-default
  • RateLimit: Throttling behavior, burst handling
  • ReadOnly: All write operations blocked
  • Tracing: Operations logged correctly
  • Middleware composition order (inner to outer)
  • Middleware with streaming I/O (wrappers work correctly)

No-Panic Tests

#[test]
fn no_panic_on_missing_file() {
    let backend = create_backend();
    let result = backend.read(std::path::Path::new("/nonexistent"));
    assert!(matches!(result, Err(FsError::NotFound { .. })));
}

#[test]
fn no_panic_on_invalid_operation() {
    let backend = create_backend();
    backend.write(std::path::Path::new("/file.txt"), b"data").unwrap();
    // Try to read_dir a regular file
    let result = backend.read_dir(std::path::Path::new("/file.txt"));
    assert!(matches!(result, Err(FsError::NotADirectory { .. })));
}

WASM Compatibility Tests (learned from vfs #68)

#[cfg(target_arch = "wasm32")]
#[wasm_bindgen_test]
fn memory_backend_works_in_wasm() {
    let backend = MemoryBackend::new();
    backend.write(std::path::Path::new("/test.txt"), b"hello").unwrap();
    // Should not panic
}

Exit criteria: All backends pass same suite; middleware tests are backend-agnostic; zero panics in any test.


Phase 4: Documentation + examples

  • Keep AGENTS.md and src/architecture/design-overview.md authoritative
  • Provide example per backend
  • Provide backend implementer guide
  • Provide middleware implementer guide
  • Document performance characteristics per backend
  • Document thread safety guarantees per backend
  • Document platform-specific behavior

Phase 5: CI/CD Pipeline

Goal: Ensure quality across platforms and prevent regressions.

Cross-Platform Testing

# .github/workflows/ci.yml
strategy:
  matrix:
    os: [ubuntu-latest, windows-latest, macos-latest]
    rust: [stable, beta]

Required CI checks:

  • cargo test on all platforms
  • cargo clippy -- -D warnings
  • cargo fmt --check
  • cargo doc --no-deps
  • WASM build test: cargo build --target wasm32-unknown-unknown

Additional CI Jobs

  • Miri (undefined behavior detection): cargo +nightly miri test
  • Address Sanitizer: Detect memory issues
  • Thread Sanitizer: Detect data races
  • Coverage: Minimum 80% line coverage

Release Checklist

  • All CI checks pass
  • No new clippy warnings
  • CHANGELOG updated
  • Version bumped appropriately
  • Documentation builds without warnings

Phase 6: Mounting Support (fuse, winfsp features)

Goal: Make mounting AnyFS stacks easy, safe, and enjoyable for programmers. Mounting is part of the anyfs crate behind feature flags.

Milestones

  • Phase 0 (design complete): API shape and roadmap
    • MountHandle, MountBuilder, MountOptions, MountError
    • Platform detection hooks (is_available) and error mapping
    • Examples anchored in the mounting guide
  • Phase 1: Linux FUSE MVP (read-only)
    • Lookup/getattr/readdir/read via fuser
    • Read-only mount option; write ops return PermissionDenied
  • Phase 2: Linux FUSE read/write
    • Create/write/rename/remove/link operations
    • Capability reporting and metadata mapping
  • Phase 3: macOS parity (macFUSE)
    • Adapter compatibility + driver detection
  • Phase 4: Windows support (WinFsp, optional Dokan)
    • Windows-specific mapping + driver detection

Exit criteria: Phase 2 delivered with reliable mount/unmount, no panics, and smoke tests; macOS/Windows continue in subsequent milestones.

API sketch (subject to change):

use anyfs::{MemoryBackend, QuotaLayer, MountHandle};

// RAM drive with a 1 GiB quota
let backend = MemoryBackend::new()
    .layer(QuotaLayer::builder()
        .max_total_size(1024 * 1024 * 1024)
        .build());

// The backend must implement FsFuse (which includes FsInode)
let mount = MountHandle::mount(backend, "/mnt/ramdisk")?;

// Now it's a real mount point:
// $ df -h /mnt/ramdisk
// $ cp large_file.bin /mnt/ramdisk/  # fast!
// $ gcc -o /mnt/ramdisk/build ...    # compile in RAM

Cross-Platform Support (planned):

| Platform | Provider | Rust Crate | Feature Flag | User Must Install |
|----------|----------|------------|--------------|-------------------|
| Linux    | FUSE     | fuser      | fuse         | fuse3 package     |
| macOS    | macFUSE  | fuser      | fuse         | macFUSE           |
| Windows  | WinFsp   | winfsp     | winfsp       | WinFsp            |

The anyfs crate provides a unified API across platforms:

impl MountHandle {
    #[cfg(unix)]
    pub fn mount<B: FsFuse>(backend: B, path: impl AsRef<Path>) -> Result<Self, ...> {
        // Uses the fuser crate
    }

    #[cfg(windows)]
    pub fn mount<B: FsFuse>(backend: B, path: impl AsRef<Path>) -> Result<Self, ...> {
        // Uses the winfsp crate
    }
}

Creative Use Cases:

| Backend Stack                         | What You Get                     |
|---------------------------------------|----------------------------------|
| MemoryBackend                         | RAM drive                        |
| MemoryBackend + Quota                 | RAM drive with size limit        |
| SqliteBackend                         | Single-file portable drive       |
| SqliteBackend (with SQLCipher)        | Encrypted portable drive         |
| Overlay<SqliteBackend, MemoryBackend> | Persistent base + RAM scratch layer |
| Cache<SqliteBackend>                  | SQLite with RAM read cache       |
| Tracing<MemoryBackend>                | RAM drive with full audit log    |
| ReadOnly<SqliteBackend>               | Immutable snapshot mount         |

Example: AI Agent Sandbox

use anyfs::{MemoryBackend, MountHandle, PathFilterLayer, QuotaLayer};

// Sandboxed workspace mounted as a real filesystem
let sandbox = MountHandle::mount(
    MemoryBackend::new()
        .layer(PathFilterLayer::builder()
            .allow("/**")
            .deny("**/.*")              // No hidden files
            .build())
        .layer(QuotaLayer::builder()
            .max_total_size(100 * 1024 * 1024)
            .build()),
    "/mnt/agent-workspace"
)?;

// The agent's tools can now use standard filesystem APIs.
// All operations are sandboxed, logged, and quota-limited.

Architecture:

┌────────────────────────────────────────────────┐
│  /mnt/myfs (FUSE mount point)                  │
├────────────────────────────────────────────────┤
│  anyfs::mount (fuse/winfsp feature)            │
│    - Linux/macOS: fuser                        │
│    - Windows: winfsp                           │
├────────────────────────────────────────────────┤
│  Middleware stack (Quota, PathFilter, etc.)    │
├────────────────────────────────────────────────┤
│  FsFuse (Memory, SQLite, etc.)                 │
│    └─ includes FsInode for efficient lookups   │
│                                                │
│  Optional: FsPosix for locks/xattr             │
└────────────────────────────────────────────────┘

Requirements:

  • Backend must implement FsFuse (includes FsInode for efficient inode operations)
  • Backends implementing FsPosix get full lock/xattr support
  • Platform-specific FUSE provider must be installed

Future work (post-MVP)

  • Async API (AsyncFs, AsyncFsFull, etc.)
  • Import/export helpers (host path <-> container)
  • Encryption middleware
  • Compression middleware
  • no_std support (learned from vfs #38)
  • Batch operations for performance (learned from agentfs #130)
  • URL-based backend registry helper (e.g., sqlite://, mem://)
  • Copy-on-write overlay variant (Afero-style CopyOnWriteFs)
  • Archive backends (zip/tar) as separate crates
  • Indexing middleware with pluggable index backends (SQLite, PostgreSQL, MariaDB, etc.)
  • Companion shell (anyfs-shell) for interactive exploration of backends and middleware
  • Language bindings (anyfs-python via PyO3, C bindings) - see design-overview.md for approach
  • Dynamic middleware plugin system (MiddlewarePlugin trait for runtime-loaded .so/.dll plugins)
  • Metrics middleware with Prometheus exporter (GET /metrics endpoint)
  • Configurable tracing/logging backends (structured logs, CEF events, remote sinks)

anyfs-shell - Local Companion Shell

Minimal interactive shell for exploring AnyFS behavior without writing a full app. This is a companion crate, not part of the core libraries.

Goals:

  • Route all operations through FileStorage to exercise middleware and backend composition.
  • Provide a familiar, low-noise CLI for navigation and file management.
  • Keep scope intentionally small (no scripting, pipes, job control).

Command set:

  • ls [path] - list directory entries (default: current directory).
  • cd <path> - change working directory.
  • pwd - print current directory.
  • cat <path> - print file contents (UTF-8; error on invalid data).
  • cp <src> <dst> - copy files.
  • mv <src> <dst> - rename/move files.
  • rm <path> - remove file.
  • mkdir <path> - create directory.
  • stat <path> - show metadata (type, size, times, permissions if supported).
  • help, exit - basic shell control.

Flags (minimal):

  • ls -l - long listing with size/type and modified time (when available).
  • mkdir -p - create intermediate directories.
  • rm -r - remove directory tree.

Backend selection (initial sketch):

  • --backend mem (default), --backend sqlite --db path, --backend stdfs --root path, --backend vrootfs --root path.
  • --config path to load a small TOML file describing backend + middleware stack.
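The `--config` file format is not yet specified; a hypothetical TOML shape (backend plus an ordered middleware stack, all keys illustrative) might look like:

```toml
# Hypothetical anyfs-shell config: one backend, middleware applied in order.
[backend]
kind = "sqlite"
db = "/data/storage.db"

[[middleware]]
kind = "quota"
max_total_size = 104857600  # 100 MiB

[[middleware]]
kind = "readonly"
```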

Example session:

anyfs:/ > ls
docs  tmp  hello.txt
anyfs:/ > cat hello.txt
Hello!
anyfs:/ > stat docs
type=dir size=0 modified=2025-02-01T12:34:56Z
anyfs:/ > exit

anyfs-vfs-compat - Interop with vfs crate

Adapter crate for bidirectional compatibility with the vfs crate ecosystem.

Why not adopt their trait? The vfs::FileSystem trait is too limited:

  • No symlinks, hard links, or permissions
  • No sync/fsync for durability
  • No truncate, statfs, or read_range
  • No middleware composition pattern

Our layered traits are a superset - Fs covers everything vfs::FileSystem does, plus our extended traits add more.

Adapters:

// Wrap a vfs::FileSystem to use as an AnyFS backend.
// Only implements Fs (Layer 1) - no links, permissions, etc.
pub struct VfsCompat<F: vfs::FileSystem>(F);
impl<F: vfs::FileSystem> FsRead for VfsCompat<F> { ... }
impl<F: vfs::FileSystem> FsWrite for VfsCompat<F> { ... }
impl<F: vfs::FileSystem> FsDir for VfsCompat<F> { ... }
// VfsCompat<F> implements Fs via the blanket impl

// Wrap an AnyFS backend to use as a vfs::FileSystem.
// Any backend implementing Fs works.
pub struct AnyFsCompat<B: Fs>(B);
impl<B: Fs> vfs::FileSystem for AnyFsCompat<B> { ... }

Use cases:

  • Migrate from vfs to AnyFS incrementally
  • Use existing vfs backends (EmbeddedFS) in AnyFS
  • Use AnyFS backends in projects that depend on vfs

Cloud Storage & Remote Access

The layered trait design enables building cloud storage services - each adapter requires only the traits it needs.

Architecture:

┌─────────────────────────────────────────────────────────────────────┐
│                             YOUR SERVER                             │
│  ┌───────────────────────────────────────────────────────────────┐  │
│  │      Quota<Tracing<SqliteBackend>>  (implements FsFuse)       │  │
│  └───────────────────────────────────────────────────────────────┘  │
│         ▲              ▲              ▲              ▲              │
│         │              │              │              │              │
│    ┌────┴────┐   ┌─────┴─────┐  ┌─────┴─────┐  ┌─────┴─────┐       │
│    │ S3 API  │   │ gRPC/REST │  │    NFS    │  │  WebDAV   │       │
│    │  (Fs)   │   │   (Fs)    │  │ (FsFuse)  │  │ (FsFull)  │       │
│    └────┬────┘   └─────┬─────┘  └─────┬─────┘  └─────┬─────┘       │
└─────────┼──────────────┼──────────────┼──────────────┼─────────────┘
          │              │              │              │
          ▼              ▼              ▼              ▼
    AWS SDK/CLI    Your SDK/app    mount /cloud   mount /webdav

Future crates for remote access:

| Crate             | Required Trait | Purpose                                       |
|-------------------|----------------|-----------------------------------------------|
| anyfs-s3-server   | Fs             | Expose as S3-compatible API (objects = files) |
| anyfs-sftp-server | FsFull         | SFTP server with permissions/links            |
| anyfs-ssh-shell   | FsFuse         | SSH server with FUSE-mounted home directories |
| anyfs-remote      | Fs             | RemoteBackend client (implements Fs)          |
| anyfs-grpc        | Fs             | gRPC protocol adapter                         |
| anyfs-webdav      | FsFull         | WebDAV server (needs permissions)             |
| anyfs-nfs         | FsFuse         | NFS server (needs inodes)                     |

anyfs-s3-server - S3-Compatible Object Storage

Expose any Fs backend as an S3-compatible API. Users access your storage with standard AWS SDKs.

use anyfs::{QuotaLayer, TracingLayer};
use anyfs_sqlite::SqliteBackend;  // Ecosystem crate
use anyfs_s3_server::S3Server;

// Your storage backend with quotas and audit logging
let backend = SqliteBackend::open("storage.db")?
    .layer(TracingLayer::new())
    .layer(QuotaLayer::builder()
        .max_total_size(100 * 1024 * 1024 * 1024)  // 100 GB
        .build());

S3Server::new(backend)
    .with_auth(auth_provider)       // Your auth implementation
    .with_bucket("user-files")      // Virtual bucket name
    .bind("0.0.0.0:9000")
    .run()
    .await?;

Client usage (standard AWS CLI/SDK):

# Upload a file
aws s3 cp document.pdf s3://user-files/ --endpoint-url http://yourserver:9000

# List files
aws s3 ls s3://user-files/ --endpoint-url http://yourserver:9000

# Download a file
aws s3 cp s3://user-files/document.pdf ./local.pdf --endpoint-url http://yourserver:9000

anyfs-remote - Remote Backend Client

An Fs implementation that connects to a remote server. Works with FileStorage or mounting.

use anyfs_remote::RemoteBackend;
use anyfs::FileStorage;

// Connect to your cloud service
let remote = RemoteBackend::connect("https://api.yourservice.com")
    .with_auth(api_key)
    .await?;

// Use like any other backend
let fs = FileStorage::new(remote);
fs.write("/documents/report.pdf", data)?;

Combined with FUSE for transparent mount:

use anyfs_remote::RemoteBackend;
use anyfs::MountHandle;

// Mount remote storage as a local directory
let remote = RemoteBackend::connect("https://yourserver.com")?;
MountHandle::mount(remote, "/mnt/cloud")?;

// Now use standard filesystem tools:
// $ cp file.txt /mnt/cloud/
// $ ls /mnt/cloud/
// $ cat /mnt/cloud/file.txt

anyfs-grpc - gRPC Protocol

Efficient binary protocol for remote Fs access.

Server side:

use anyfs_grpc::GrpcServer;

let backend = SqliteBackend::open("storage.db")?;
GrpcServer::new(backend)
    .bind("[::1]:50051")
    .serve()
    .await?;

Client side:

use anyfs_grpc::GrpcBackend;

let backend = GrpcBackend::connect("http://[::1]:50051").await?;
let fs = FileStorage::new(backend);

Multi-Tenant Cloud Storage Example

use anyfs::{QuotaLayer, PathFilterLayer, TracingLayer};
use anyfs_sqlite::SqliteBackend;  // Ecosystem crate
use anyfs_s3_server::S3Server;

// Per-tenant backend factory
fn create_tenant_storage(tenant_id: &str, quota_bytes: u64) -> impl Fs {
    let db_path = format!("/data/tenants/{}.db", tenant_id);

    SqliteBackend::open(&db_path).unwrap()
        .layer(TracingLayer::new()
            .with_target(&format!("tenant.{}", tenant_id)))
        .layer(PathFilterLayer::builder()
            .allow("/**")
            .deny("../**")  // No path traversal
            .build())
        .layer(QuotaLayer::builder()
            .max_total_size(quota_bytes)
            .build())
}

// Tenant-aware S3 server
S3Server::new_multi_tenant(|request| {
    let tenant_id = extract_tenant(request)?;
    let quota = get_tenant_quota(tenant_id)?;
    Ok(create_tenant_storage(tenant_id, quota))
})
.bind("0.0.0.0:9000")
.run()
.await?;

anyfs-sftp-server - SFTP Access with Shell Commands

Expose a FsFull backend as an SFTP server. Users connect with standard SSH/SFTP clients and navigate with familiar shell commands.

Architecture:

┌──────────────────────────────────────────────────────────┐
│                       YOUR SERVER                        │
│                                                          │
│  ┌───────────────┐    ┌────────────────────────────────┐ │
│  │ SFTP Server   │───▶│ User's isolated FileStorage    │ │
│  │ (anyfs-sftp)  │    │   └─▶ Quota<SqliteBackend>     │ │
│  └───────────────┘    │       └─▶ /data/users/alice.db │ │
│          ▲            └────────────────────────────────┘ │
└──────────┼───────────────────────────────────────────────┘
           │
           │ sftp://
           │
     ┌─────┴─────┐
     │  Remote   │  $ cd /documents
     │  User     │  $ ls
     │  (shell)  │  $ put file.txt
     └───────────┘

Server implementation:

use anyfs::{QuotaLayer, TracingLayer};
use anyfs_sqlite::SqliteBackend;  // Ecosystem crate
use anyfs_sftp_server::SftpServer;

// Per-user isolated backend factory
fn get_user_storage(username: &str) -> impl FsFull {
    let db_path = format!("/data/users/{}.db", username);

    SqliteBackend::open(&db_path).unwrap()
        .layer(TracingLayer::new()
            .with_target(&format!("user.{}", username)))
        .layer(QuotaLayer::builder()
            .max_total_size(10 * 1024 * 1024 * 1024)  // 10GB per user
            .build())
}

SftpServer::new(get_user_storage)
    .with_host_key("/etc/ssh/host_key")
    .bind("0.0.0.0:22")
    .run()
    .await?;

User experience (standard SFTP client):

$ sftp alice@yourserver.com
Connected to yourserver.com.
sftp> pwd
/
sftp> ls
documents/  photos/  backup/
sftp> cd documents
sftp> ls
report.pdf  notes.txt
sftp> put local_file.txt
Uploading local_file.txt to /documents/local_file.txt
sftp> get notes.txt
Downloading /documents/notes.txt
sftp> mkdir projects
sftp> rm old_file.txt

All operations happen on the user’s isolated SQLite database on your server.

anyfs-ssh-shell - Full Shell Access with Sandboxed Home

Give users a real SSH shell where their home directory is backed by FsFuse.

Server implementation:

use anyfs::{QuotaLayer, MountHandle};
use anyfs_sqlite::SqliteBackend;  // Ecosystem crate
use anyfs_ssh_shell::SshShellServer;

// On user login, mount their isolated storage as $HOME
fn on_user_login(username: &str) -> Result<(), Error> {
    let db_path = format!("/data/users/{}.db", username);
    let backend = SqliteBackend::open(&db_path)?
        .layer(QuotaLayer::builder()
            .max_total_size(10 * 1024 * 1024 * 1024)
            .build());

    let mount_point = format!("/home/{}", username);
    MountHandle::mount(backend, &mount_point)?;
    Ok(())
}

SshShellServer::new()
    .on_login(on_user_login)
    .bind("0.0.0.0:22")
    .run()
    .await?;

User experience (full shell):

$ ssh alice@yourserver.com
Welcome to YourServer!

alice@server:~$ pwd
/home/alice
alice@server:~$ ls -la
total 3
drwxr-xr-x  4 alice alice 4096 Dec 25 10:00 .
drwxr-xr-x  2 alice alice 4096 Dec 25 10:00 documents
drwxr-xr-x  2 alice alice 4096 Dec 25 10:00 photos

alice@server:~$ cat documents/notes.txt
Hello world!

alice@server:~$ echo "new content" > documents/new_file.txt

alice@server:~$ du -sh .
150M    .

# Everything they do is actually stored in /data/users/alice.db on the server!
# They can use vim, gcc, python - all working on their isolated FsFuse backend

Isolated Shell Hosting Use Cases

| Use Case         | Backend Stack                     | What Users Get                  |
|------------------|-----------------------------------|---------------------------------|
| Shared hosting   | Quota<SqliteBackend>              | Shell + isolated home in SQLite |
| Dev containers   | Overlay<BaseImage, MemoryBackend> | Shared base + ephemeral scratch |
| Coding education | Quota<MemoryBackend>              | Temporary sandboxed environment |
| CI/CD runners    | Tracing<MemoryBackend>            | Audited ephemeral workspace     |
| Secure file drop | PathFilter<SqliteBackend>         | Write-only inbox directory      |

Access Pattern Summary

| Access Method | Crate                                  | Client Requirement     | Best For                       |
|---------------|----------------------------------------|------------------------|--------------------------------|
| S3 API        | anyfs-s3-server                        | AWS SDK (any language) | Object storage, web apps       |
| SFTP          | anyfs-sftp-server                      | Any SFTP client        | Shell-like file access         |
| SSH Shell     | anyfs-ssh-shell + anyfs (fuse feature) | SSH client             | Full shell with sandboxed home |
| gRPC          | anyfs-grpc                             | Generated client       | High-performance apps          |
| REST          | Custom adapter                         | HTTP client            | Simple integrations            |
| FUSE mount    | anyfs (fuse feature) + anyfs-remote    | FUSE installed         | Transparent local access       |
| WebDAV        | anyfs-webdav                           | WebDAV client/OS       | File manager access            |
| NFS           | anyfs-nfs                              | NFS client             | Unix network shares            |

Lessons Learned (Reference)

This plan incorporates lessons from issues in similar projects:

| Source       | Issue                    | Lesson Applied               |
|--------------|--------------------------|------------------------------|
| vfs #72      | RwLock panic             | Thread safety tests          |
| vfs #47      | create_dir_all race      | Concurrent stress tests      |
| vfs #8, #23  | Panics instead of errors | No-panic policy              |
| vfs #24, #42 | Path inconsistencies     | Path edge case tests         |
| vfs #33      | Hard to match errors     | Ergonomic FsError design     |
| vfs #68      | WASM panics              | WASM compatibility tests     |
| vfs #66      | 'static confusion        | Minimal trait bounds         |
| agentfs #130 | Slow file deletion       | Performance documentation    |
| agentfs #129 | Signal handling          | Proper Drop implementations  |

See Lessons from Similar Projects for full analysis.