# Go Jdenticon Examples

This directory contains practical examples demonstrating various usage patterns for the go-jdenticon library.

## Examples

### `concurrent-usage.go`

Demonstrates safe and efficient concurrent usage patterns:

- Package-level functions with a singleton generator
- Shared generator instances for optimal performance
- Cache performance monitoring
- High-throughput concurrent generation

**Run the example:**

```sh
go run examples/concurrent-usage.go
```

**Run with race detection:**

```sh
go run -race examples/concurrent-usage.go
```

The race detector confirms that all concurrent patterns are thread-safe.

## CLI Batch Processing

The CLI tool includes high-performance batch processing capabilities:

**Create a test input file:**

```sh
echo -e "alice@example.com\nbob@example.com\ncharlie@example.com" > users.txt
```

**Generate icons concurrently:**

```sh
go run ./cmd/jdenticon batch users.txt --output-dir ./avatars --concurrency 4
```

**Performance comparison:**

```sh
# Sequential processing
time go run ./cmd/jdenticon batch users.txt --output-dir ./avatars --concurrency 1

# Concurrent processing (default: CPU count)
time go run ./cmd/jdenticon batch users.txt --output-dir ./avatars
```

Batch mode shows significant speedups from concurrent processing.

## Key Takeaways

1. **All public functions are goroutine-safe** - You can call any function from multiple goroutines
2. **Generator reuse is optimal** - Create one generator and share it across goroutines
3. **Icons are immutable** - Generated icons are safe to share between goroutines
4. **Caching improves performance** - Larger cache sizes benefit concurrent workloads
5. **Monitor with metrics** - Use `GetCacheMetrics()` to track performance

## Performance Notes

From the concurrent usage example:

- **Single-threaded equivalent**: ~4-15 icons/sec (race detector overhead)
- **Concurrent (20 workers)**: ~333,000 icons/sec without cache hits
- **Memory efficient**: ~2-6 KB per generated icon
- **Thread-safe**: no race conditions detected

The library is highly optimized for concurrent workloads and scales well with the number of CPU cores.