What Are Kotlin Coroutines?
Kotlin Coroutines provide an elegant and powerful approach to writing asynchronous and concurrent code in Kotlin. They allow developers to replace complex callback-based code with sequential-looking logic that’s easier to read, maintain, and reason about. Coroutines are essentially lightweight threads that can be suspended and resumed without blocking the underlying thread, which makes them very cheap to create and particularly efficient for I/O-bound and highly concurrent workloads (CPU-bound work still runs on a thread pool such as Dispatchers.Default).
Key Concepts of Kotlin Coroutines
- Coroutine: A lightweight, cooperatively-scheduled thread of execution that can be suspended and resumed later.
- Suspend Function: A function that can pause and resume without blocking the thread, allowing for asynchronous, non-blocking operations.
- Dispatcher: Determines on which thread or thread pool a coroutine runs, allowing fine control over concurrency and parallelism.
- Scope: Defines the lifecycle and context for coroutines, ensuring structured concurrency and safe cancellation.
50 Detailed Examples
1. Starting a Coroutine
import kotlinx.coroutines.*

fun main() = runBlocking {
    println("Start Coroutine!")
    launch {
        println("Coroutine is working!")
    }
    println("Coroutine Ended!")
}
Explanation: runBlocking creates a blocking scope in which we can launch a new coroutine. The launch builder starts a coroutine that inherits runBlocking’s context, so it runs on the same thread. Although the code looks sequential, the println inside launch runs concurrently with the rest of the block and typically prints after "Coroutine Ended!", because the launched coroutine is only scheduled to run once the main body suspends or completes. The runBlocking block will not return until its child coroutines have finished.
Why this is useful: It gives you a straightforward way to start using coroutines for asynchronous tasks without blocking the entire program.
Improvement: Use launch with specific dispatchers (e.g. Dispatchers.IO) for more efficient thread usage when doing I/O operations.
2. Suspend Functions
suspend fun fetchData(): String {
    delay(1000)
    return "Data fetched!"
}

fun main() = runBlocking {
    println("Fetching...")
    val result = fetchData()
    println(result)
}
Explanation: A suspend function like fetchData can use other suspend functions (such as delay) to pause its execution without blocking threads. When resumed, it continues from the same point.
Why this is useful: Suspend functions enable writing asynchronous code in a synchronous style, improving readability and maintainability.
Improvement: Chain multiple suspend calls or wrap them with error handling and timeouts for robust asynchronous workflows.
3. Dispatcher: Controlling Threads
fun main() = runBlocking {
    launch(Dispatchers.IO) {
        println("Running in the IO thread!")
    }
}
Explanation: Dispatchers control where coroutines run. Dispatchers.IO uses a pool of threads optimized for blocking I/O, such as file or network operations. This prevents blocking the main thread.
Why this is useful: It helps to segregate CPU-intensive work (Dispatchers.Default) from I/O operations (Dispatchers.IO) and UI updates (Dispatchers.Main), improving performance and responsiveness.
Improvement: Use context switching with withContext for fine-grained control of where a particular part of code runs.
4. CoroutineScope
class MyViewModel {
    private val scope = CoroutineScope(Dispatchers.Main)

    fun doWork() {
        scope.launch {
            println("Doing some work on the Main thread!")
        }
    }
}
Explanation: A CoroutineScope binds coroutines to a particular lifecycle. Here, a ViewModel might have a scope tied to the main thread. When the ViewModel is cleared, the scope can be cancelled, preventing memory leaks.
Why this is useful: Maintaining a scope ensures structured concurrency and avoids orphaned coroutines running without proper cancellation.
Improvement: Use predefined scopes like viewModelScope in Android or custom scopes to better manage coroutines tied to components’ lifecycles.
5. Async: Getting Results
fun main() = runBlocking {
    val deferred = async {
        delay(1000)
        "Result from async!"
    }
    println(deferred.await())
}
Explanation: async creates a coroutine that returns a result, encapsulated in a Deferred object. Calling await() suspends the current coroutine until the result is ready, allowing you to retrieve values from asynchronous tasks easily.
Why this is useful: Collecting results from concurrent tasks is essential for parallel computations and aggregating data.
Improvement: Combine multiple async tasks to run in parallel, then await all results for efficient parallelization.
6. Structured Concurrency with coroutineScope
fun main() = runBlocking {
    coroutineScope {
        launch { delay(500); println("Task 1 done") }
        launch { delay(1000); println("Task 2 done") }
    }
    println("All tasks completed!")
}
Explanation: coroutineScope ensures that all its child coroutines finish before it returns. This maintains a clear structure: no coroutine left behind unintentionally.
Why this is useful: Provides structured concurrency, improving the reliability and predictability of concurrent code.
Improvement: Combine async and launch inside a coroutineScope for complex orchestrations, ensuring all subtasks complete or fail together.
7. Error Handling with try-catch
fun main() = runBlocking {
    val job = launch {
        try {
            throw Exception("Something went wrong!")
        } catch (e: Exception) {
            println("Caught exception: ${e.message}")
        }
    }
    job.join()
}
Explanation: Exceptions thrown inside a coroutine started with launch propagate to the parent scope; a try-catch wrapped around the launch call (or its join()) will not catch them. Handle the exception inside the coroutine, as shown here, or install a CoroutineExceptionHandler at the root of the hierarchy.
Why this is useful: Robust error handling prevents app crashes and provides a mechanism to recover or inform the user.
Improvement: Combine with supervisorScope or structured concurrency patterns to isolate and manage failures in complex tasks.
8. Using withContext for Context Switching
suspend fun processData() {
    withContext(Dispatchers.IO) {
        println("Processing data in background!")
    }
}

fun main() = runBlocking {
    processData()
}
Explanation: withContext changes the coroutine’s context, enabling thread switches to perform certain parts of a task in a different dispatcher, then return to the original context seamlessly.
Why this is useful: Ensures CPU-bound work doesn’t block the main thread, and I/O operations run efficiently in the background.
Improvement: Use withContext judiciously to avoid overhead from excessive context switching.
9. Flow Basics
import kotlinx.coroutines.flow.*

fun numbersFlow(): Flow<Int> = flow {
    for (i in 1..3) {
        emit(i)
        delay(300)
    }
}

fun main() = runBlocking {
    numbersFlow().collect { println("Received: $it") }
}
Explanation: A Flow emits multiple values sequentially. Unlike suspend functions (which return a single result), flows can produce a stream of values over time.
Why this is useful: Flows handle asynchronous data streams, perfect for event streams, UI updates, or continuous data sources.
Improvement: Combine flows, apply operators, or switch contexts to build complex reactive pipelines.
10. Mutex for Shared Resources
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock

val mutex = Mutex()
var counter = 0

suspend fun increment() {
    mutex.withLock {
        counter++
    }
}

fun main() = runBlocking {
    val jobs = List(100) {
        launch { increment() }
    }
    jobs.forEach { it.join() }
    println("Counter: $counter")
}
Explanation: A Mutex provides mutual exclusion to prevent race conditions when multiple coroutines modify the same resource.
Why this is useful: Ensures thread-safe updates to shared state, crucial in concurrent applications.
Improvement: Consider other concurrency tools like Atomic variables or Channel if locks become a bottleneck.
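For comparison, here is a minimal sketch of the atomic alternative mentioned above, using java.util.concurrent’s AtomicInteger for lock-free, thread-safe increments (the counter name is just for illustration):
import java.util.concurrent.atomic.AtomicInteger

val atomicCounter = AtomicInteger(0)

fun main() = runBlocking {
    val jobs = List(100) {
        launch(Dispatchers.Default) { atomicCounter.incrementAndGet() } // atomic, no lock needed
    }
    jobs.forEach { it.join() }
    println("Counter: ${atomicCounter.get()}")
}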
11. launch vs async
fun main() = runBlocking {
    val job = launch {
        println("Launch: no result return")
    }
    val deferred = async {
        delay(500)
        "Async: returns result"
    }
    job.join()
    println(deferred.await())
}
Explanation: launch is for fire-and-forget coroutines that don’t return a value. async is for coroutines that compute a result you can retrieve with await.
Why this is useful: Choose launch when you just want to do work, async when you need a value returned.
Improvement: For complex operations, combine multiple async calls to run tasks in parallel and then await their results together.
12. Lifecycle-Awareness with viewModelScope (Android)
// In Android, inside a ViewModel
class UserViewModel : ViewModel() {
    fun fetchData() {
        viewModelScope.launch {
            val data = getUserData() // suspend function
            println("Data: $data")
        }
    }

    suspend fun getUserData(): String {
        delay(1000)
        return "User Info"
    }
}
Explanation: viewModelScope is a lifecycle-aware scope provided by Android’s architecture components. Coroutines launched in this scope are automatically cancelled when the ViewModel is cleared.
Why this is useful: Prevents memory leaks and wasted work when the associated UI component is no longer active.
Improvement: Combine with LiveData or StateFlow to seamlessly update UI on data changes.
13. Timeout with withTimeout
fun main() = runBlocking {
    try {
        withTimeout(500) {
            delay(1000)
            println("This won't print")
        }
    } catch (e: TimeoutCancellationException) {
        println("Task timed out!")
    }
}
Explanation: withTimeout enforces a maximum time limit on a suspend block. If it doesn’t complete in time, it throws a TimeoutCancellationException.
Why this is useful: Prevents unexpected long-running operations from hanging indefinitely, improving reliability.
Improvement: Use withTimeoutOrNull for a non-exception approach that returns null on timeout.
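For illustration, a minimal sketch of the withTimeoutOrNull variant, which returns null instead of throwing when the block takes too long:
fun main() = runBlocking {
    val result = withTimeoutOrNull(500) {
        delay(1000)
        "Finished in time"
    }
    println(result ?: "Timed out, got null instead of an exception") // prints the fallback
}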
14. Combining Results with async
fun main() = runBlocking {
    val result1 = async { "Hello" }
    val result2 = async { "World" }
    println("${result1.await()} ${result2.await()}")
}
Explanation: Multiple async tasks can run in parallel, and you can combine their results once they are ready.
Why this is useful: Optimizes performance by executing independent tasks concurrently.
Improvement: Handle exceptions and cancellations to ensure partial failures don’t break the entire workflow.
15. supervisorScope
fun main() = runBlocking {
    supervisorScope {
        launch {
            throw Exception("Failed Child")
        }
        launch {
            delay(500)
            println("Other child completes even if one fails")
        }
    }
}
Explanation: supervisorScope ensures that the failure of one child coroutine does not cancel its siblings or the scope itself. Unlike coroutineScope, it isolates child failures; a failed child’s exception is delivered to a CoroutineExceptionHandler if one is installed (otherwise the default handler simply logs it).
Why this is useful: Allows partial success, making your system more resilient.
Improvement: Use supervisorScope in scenarios where some tasks are non-critical and should not fail the whole operation.
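If a failed child’s exception should be handled rather than just logged by the default handler, a CoroutineExceptionHandler can be attached to that child. A minimal sketch (the handler here only prints the message):
fun main() = runBlocking {
    val handler = CoroutineExceptionHandler { _, e ->
        println("Handled: ${e.message}") // invoked for uncaught failures of this child
    }
    supervisorScope {
        launch(handler) {
            throw Exception("Failed Child")
        }
        launch {
            delay(500)
            println("Other child still completes")
        }
    }
}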
16. Retry Logic
suspend fun fetchWithRetry(): String {
    repeat(3) { attempt ->
        try {
            if (attempt < 2) throw Exception("Temporary error")
            return "Success"
        } catch (e: Exception) {
            println("Retrying... attempt $attempt")
        }
    }
    return "Failed after retries"
}

fun main() = runBlocking {
    println(fetchWithRetry())
}
Explanation: Manual retry logic handles transient errors, attempting a task multiple times before giving up.
Why this is useful: Improves robustness, especially with flaky network calls.
Improvement: Combine with exponential backoff (see example 49) or custom retry policies for smarter retries.
17. Using Channels
import kotlinx.coroutines.channels.Channel

fun main() = runBlocking {
    val channel = Channel<Int>()
    launch {
        for (x in 1..5) channel.send(x)
        channel.close()
    }
    for (y in channel) {
        println("Received: $y")
    }
}
Explanation: Channels are a way of communicating between coroutines safely. Producer coroutines can send values, and consumer coroutines can receive them.
Why this is useful: Channels decouple senders and receivers, enabling safe asynchronous message passing.
Improvement: Explore buffered channels, conflated channels, and integration with flows for more advanced patterns.
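As a sketch of a buffered channel, a capacity of 3 (an arbitrary choice) lets the producer run ahead of a slower consumer instead of suspending on every send:
fun main() = runBlocking {
    val channel = Channel<Int>(capacity = 3)
    launch {
        for (x in 1..5) {
            channel.send(x)          // suspends only when the buffer is full
            println("Sent $x")
        }
        channel.close()
    }
    for (y in channel) {
        delay(200)                   // slow consumer
        println("Received: $y")
    }
}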
18. Handling Multiple Flows with zip
fun numbersFlow() = flowOf(1, 2, 3)
fun lettersFlow() = flowOf("A", "B", "C")

fun main() = runBlocking {
    numbersFlow().zip(lettersFlow()) { n, l -> "$n$l" }
        .collect { println(it) }
}
Explanation: zip pairs up values from two flows, emitting combined results only when both have emitted corresponding values.
Why this is useful: Synchronizes emissions from multiple streams into logical pairs.
Improvement: Use combine if you need to react immediately when any flow emits a new value rather than waiting for pairs.
19. Handling Exceptions in Flow
fun faultyFlow() = flow {
    emit(1)
    throw Exception("Error in Flow")
}

fun main() = runBlocking {
    faultyFlow()
        .catch { e -> println("Caught: ${e.message}") }
        .collect { println(it) }
}
Explanation: The catch operator in a flow handles upstream exceptions, allowing the flow to recover or log errors without crashing.
Why this is useful: Ensures that errors in asynchronous streams are handled gracefully.
Improvement: Combine catch with retry for robust error recovery strategies.
20. Flow is Cold
fun myFlow() = flow {
    println("Flow started")
    emit(1)
}

fun main() = runBlocking {
    println("Before collection")
    myFlow().collect { println(it) }
    println("After collection")
}
Explanation: Flows are cold streams, meaning the flow block only runs when there is a subscriber calling collect. Each collector re-triggers the emission.
Why this is useful: Flows don’t do unnecessary work unless requested, saving resources.
Improvement: Convert cold flows to hot flows (like StateFlow or SharedFlow) if you need continuous emission regardless of collectors.
21. StateFlow for UI State
val _state = MutableStateFlow("Initial")
val state: StateFlow = _state
fun main() = runBlocking {
launch {
state.collect { println("State: $it") }
}
delay(500)
_state.value = "Updated"
}
Explanation: StateFlow holds a single up-to-date state and emits updates to collectors. Collectors always get the latest value immediately.
Why this is useful: Perfect for representing UI state that changes over time (e.g., in ViewModels).
Improvement: Use StateFlow instead of LiveData on Android for structured concurrency and full Kotlin support.
22. SharedFlow for Events
val _event = MutableSharedFlow<String>()
val event = _event.asSharedFlow()

fun main() = runBlocking {
    val job = launch {
        event.collect { println("Event: $it") }
    }
    delay(100)           // give the collector time to subscribe
    _event.emit("New Event!")
    delay(100)
    job.cancel()         // a SharedFlow never completes, so stop the collector
}
Explanation: SharedFlow broadcasts values to multiple collectors, acting like a hot stream of events.
Why this is useful: Ideal for one-time events, notifications, or messages shared across multiple listeners.
Improvement: Adjust replay and buffer parameters in SharedFlow to handle backpressure and event delivery guarantees.
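As a sketch, the constructor parameters below (the flow name and the values are arbitrary) control how many past events late subscribers see and what happens when emitters outpace collectors:
import kotlinx.coroutines.channels.BufferOverflow

val _bufferedEvent = MutableSharedFlow<String>(
    replay = 1,                                    // late collectors still receive the last event
    extraBufferCapacity = 16,                      // room for bursts of emissions
    onBufferOverflow = BufferOverflow.DROP_OLDEST  // drop old events rather than suspend emitters
)

fun main() = runBlocking {
    _bufferedEvent.emit("Emitted before any collector")
    val job = launch {
        _bufferedEvent.collect { println("Got: $it") } // receives the replayed event immediately
    }
    delay(100)
    job.cancel()
}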
23. Unit Testing with runTest
@Test
fun testFetchData() = runTest {
    val result = fetchData() // a suspend function
    assertEquals("Data fetched!", result)
}
Explanation: runTest enables testing suspend functions and coroutines in a controlled environment. It provides a virtual clock and deterministic execution, making tests reliable.
Why this is useful: Ensures your asynchronous logic is correct and reproducible under test conditions.
Improvement: Use structured concurrency in tests and virtual time to test delays and timeouts predictably.
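A sketch of virtual-time control with the test scheduler (the test name and assertions are illustrative):
@Test
fun testDelayedUpdate() = runTest {
    var result: String? = null
    launch {
        delay(1_000)          // virtual time, no real waiting
        result = "done"
    }
    assertNull(result)        // the child has not run yet
    advanceTimeBy(1_000)      // move the virtual clock forward
    runCurrent()              // run tasks scheduled at the new time
    assertEquals("done", result)
}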
24. lifecycleScope in Fragments (Android)
// In a Fragment
viewLifecycleOwner.lifecycleScope.launch {
    val data = fetchData()
    println("Data from fragment: $data")
}
Explanation: lifecycleScope ties coroutines to the Android UI lifecycle, automatically cancelling them when the Fragment’s view is destroyed, preventing leaks and wasted work.
Why this is useful: Cleaner and safer code in UI components on Android.
Improvement: Use lifecycleScope with flow operators like repeatOnLifecycle to automatically collect flows at appropriate lifecycle states.
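A sketch of that pattern, assuming a viewModel that exposes uiState: StateFlow<String> and the androidx.lifecycle runtime KTX artifact on the classpath:
// In a Fragment
viewLifecycleOwner.lifecycleScope.launch {
    viewLifecycleOwner.repeatOnLifecycle(Lifecycle.State.STARTED) {
        viewModel.uiState.collect { state ->
            println("Render state: $state") // collection stops when the view leaves STARTED
        }
    }
}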
25. Progress Updates
suspend fun showProgress() {
    for (i in 1..5) {
        println("Progress: $i")
        delay(200)
    }
}

fun main() = runBlocking {
    showProgress()
}
Explanation: Simple suspend functions with delays can simulate progress updates or animate UI elements based on time intervals.
Why this is useful: Displaying incremental progress keeps users informed about long-running operations.
Improvement: Combine such patterns with flows or channels to stream updates to UI elements more reactively.
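A minimal sketch of the flow-based variant, where progress values are emitted as a stream instead of printed inside the worker:
fun progressFlow(): Flow<Int> = flow {
    for (i in 1..5) {
        emit(i)          // report progress to whoever collects
        delay(200)
    }
}

fun main() = runBlocking {
    progressFlow().collect { println("Progress: $it / 5") }
}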
More Advanced Examples
26. Parallel Network Calls
suspend fun fetchUser(): String {
    delay(1000)
    return "User Data"
}

suspend fun fetchPosts(): String {
    delay(1000)
    return "Posts Data"
}

fun main() = runBlocking {
    val userDeferred = async { fetchUser() }
    val postsDeferred = async { fetchPosts() }
    val userData = userDeferred.await()
    val postsData = postsDeferred.await()
    println("Fetched: $userData and $postsData")
}
Explanation: Multiple async calls run in parallel, speeding up total response time when tasks are independent.
Why this is useful: Ideal for loading multiple data sets concurrently, reducing user wait times.
Improvement: Add error handling and use supervisorScope if one call failing shouldn’t cancel all requests.
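One way to add that error handling is to await the whole group together and catch failures around it; a rough sketch reusing fetchUser and fetchPosts from above:
fun main() = runBlocking {
    val results = try {
        coroutineScope {
            listOf(
                async { fetchUser() },
                async { fetchPosts() }
            ).awaitAll()             // rethrows if either call fails
        }
    } catch (e: Exception) {
        emptyList<String>()          // fall back when any call failed
    }
    println("Fetched: $results")
}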
27. CoroutineContext and Custom Threads
val customContext = newSingleThreadContext("MyThread")
fun main() = runBlocking {
launch(customContext) {
println("Running on custom thread: ${Thread.currentThread().name}")
}
}
Explanation: Creating a custom context gives you control over the threading environment. newSingleThreadContext dedicates a single thread to the coroutines launched within it.
Why this is useful: Useful for legacy code or specialized tasks that need isolation from shared thread pools.
Improvement: Reuse custom dispatchers or use Executors to manage thread pools more efficiently.
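As a sketch of the Executors approach (the pool size of 4 is arbitrary):
import java.util.concurrent.Executors

fun main() = runBlocking {
    Executors.newFixedThreadPool(4).asCoroutineDispatcher().use { dispatcher ->
        launch(dispatcher) {
            println("Running on: ${Thread.currentThread().name}")
        }.join()
    }
}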
28. Database Operations with IO Dispatcher
suspend fun saveToDatabase(data: String) {
    withContext(Dispatchers.IO) {
        println("Saving $data to DB")
    }
}

fun main() = runBlocking {
    saveToDatabase("Sample Data")
}
Explanation: Offloading database writes to an IO dispatcher prevents the main thread from blocking, improving UI responsiveness.
Why this is useful: Keeps your app responsive, which is crucial for good UX on Android or desktop applications.
Improvement: Wrap your database calls (Room, SQLite) in withContext(Dispatchers.IO) to follow best practices.
29. Periodic Tasks
suspend fun periodicTask() {
    while (true) {
        println("Task at ${System.currentTimeMillis()}")
        delay(1000)
    }
}

fun main() = runBlocking {
    val job = launch { periodicTask() }
    delay(3000)
    job.cancel() // without this, the loop would keep runBlocking alive forever
    println("Stopping periodic task")
}
Explanation: A coroutine can run indefinitely, performing tasks at intervals. Here, it prints a timestamp every second until cancelled.
Why this is useful: Perfect for polling external services, sending heartbeats, or scheduled maintenance tasks.
Improvement: Use cancellation or structured concurrency to stop these tasks gracefully.
30. Debouncing User Input
var job: Job? = null

suspend fun processInput(input: String) {
    println("Processing: $input")
}

fun main() = runBlocking {
    repeat(5) { i ->
        job?.cancel()
        job = launch {
            delay(200)
            processInput("Input $i")
        }
    }
    job?.join()
}
Explanation: Debouncing waits for a pause in user input before processing. Canceling the previous job ensures that rapid inputs won’t all be processed, only the last one.
Why this is useful: Prevents unnecessary processing and improves performance in search bars or live filters.
Improvement: Implement a more sophisticated debounce mechanism with flows and operators like debounce.
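A minimal sketch of the flow-based debounce (the operator currently requires the FlowPreview opt-in); only values followed by 300 ms of silence reach the collector:
@OptIn(FlowPreview::class)
fun main() = runBlocking {
    flow {
        emit("k")
        delay(100)
        emit("ko")
        delay(100)
        emit("kot")
        delay(400)                 // a pause longer than the debounce window
        emit("kotlin")
    }
        .debounce(300)
        .collect { println("Search for: $it") } // prints "kot" and "kotlin"
}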
31. Cancel Coroutine Execution
fun main() = runBlocking {
    val job = launch {
        repeat(1000) { i ->
            println("Doing work $i")
            delay(200)
        }
    }
    delay(600)
    job.cancel()
    println("Cancelled the job!")
}
Explanation: Coroutines are cooperative: calling job.cancel() requests cancellation. The coroutine checks for cancellation at suspension points (like delay).
Why this is useful: Allows stopping background tasks that are no longer needed, freeing up resources.
Improvement: Gracefully handle cancellation by using try-finally blocks or the ensureActive() function.
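A minimal sketch of cancellation-aware cleanup with try-finally:
fun main() = runBlocking {
    val job = launch {
        try {
            repeat(1000) { i ->
                println("Doing work $i")
                delay(200)
            }
        } finally {
            println("Cleaning up resources") // runs even when the coroutine is cancelled
        }
    }
    delay(600)
    job.cancelAndJoin()
}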
32. Exception Handling in Structured Concurrency
fun main() = runBlocking {
    try {
        coroutineScope {
            launch {
                throw Exception("Child exception")
            }
        }
    } catch (e: Exception) {
        println("Parent caught propagated exception: ${e.message}")
    }
    println("Parent continues after handling the failure")
}
Explanation: Inside coroutineScope (or any regular launch hierarchy), an exception in one child cancels the whole scope and is rethrown to the caller, so failures never go unnoticed. Wrapping the scope in try-catch, as above, lets the parent observe and handle the failure.
Why this is useful: Structured concurrency ensures exceptions are properly propagated, preventing silent failures.
Improvement: Use supervisorScope or custom exception handlers if you need partial fault-tolerance.
33. Transform Flow Data with map
fun numberFlow() = flowOf(1, 2, 3)

fun main() = runBlocking {
    numberFlow()
        .map { it * 2 }
        .collect { println(it) }
}
Explanation: Flow operators like map transform emitted values. Here, each number is doubled before collection.
Why this is useful: Reactive transformations keep code concise and flexible.
Improvement: Chain multiple operators (filter, map, take, etc.) to create powerful data-processing pipelines.
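A small sketch of such a pipeline, chaining filter, map, and take over a range turned into a flow:
fun main() = runBlocking {
    (1..10).asFlow()
        .filter { it % 2 == 0 }   // keep even numbers
        .map { it * it }          // square them
        .take(3)                  // stop after three values
        .collect { println(it) }  // prints 4, 16, 36
}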
34. Buffering Flow Data
fun slowProducer() = flow {
    for (i in 1..3) {
        emit(i)
        delay(300)
    }
}

fun main() = runBlocking {
    slowProducer()
        .buffer()
        .collect { value ->
            delay(500)
            println("Collected: $value")
        }
}
Explanation: buffer() allows the flow to emit items without waiting for the collector to process them, improving throughput at the cost of extra memory.
Why this is useful: Enhances performance when the producer is faster than the consumer.
Improvement: Adjust buffer size and consider backpressure strategies to handle large streams efficiently.
35. Combining Flows with combine
fun flowA() = flowOf("A1", "A2")
fun flowB() = flowOf("B1", "B2")
fun main() = runBlocking {
flowA().combine(flowB()) { a, b -> "$a-$b" }
.collect { println(it) }
}
Explanation: combine emits a new value whenever any of the input flows emit, pairing the latest values of each flow.
Why this is useful: React to changes from multiple data sources simultaneously, always having the latest combined state.
Improvement: Combine more than two flows or chain multiple combine operations for complex data synchronization.
36. Sharing Flows with shareIn
fun mainFlow() = flowOf(1, 2, 3)

fun main() = runBlocking {
    val shared = mainFlow().shareIn(this, SharingStarted.Eagerly, replay = 1)
    val c1 = launch { shared.collect { println("Collector1: $it") } }
    val c2 = launch { shared.collect { println("Collector2: $it") } }
    delay(200)
    c1.cancel()
    c2.cancel() // a shared flow never completes, so stop the collectors explicitly
}
Explanation: shareIn converts a cold flow into a hot flow, where multiple collectors share the same emission sequence instead of each triggering a new execution. Because the upstream here completes before the collectors subscribe, each collector only receives the replayed last value.
Why this is useful: Saves resources by avoiding redundant computations for each collector.
Improvement: Use different SharingStarted strategies and replay values to fit your app’s needs.
37. Retry on Flow Error
fun errorFlow() = flow {
    emit(1)
    throw Exception("Flow error")
}

fun main() = runBlocking {
    errorFlow()
        .retry(3) { it is Exception }
        .catch { println("Eventually failed") }
        .collect { println(it) }
}
Explanation: The retry operator attempts to re-collect the flow if an exception occurs, up to a specified number of times.
Why this is useful: Helps handle transient issues in streaming data, such as temporary network failures.
Improvement: Add logic to increase the delay between retries, or only retry certain exceptions.
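For example, retryWhen exposes the failure cause and the attempt number, so the delay can grow between attempts; a rough sketch reusing errorFlow from above:
fun main() = runBlocking {
    errorFlow()
        .retryWhen { _, attempt ->
            // retry up to 3 times, waiting a bit longer before each new attempt
            if (attempt < 3) {
                delay(100 * (attempt + 1))
                true
            } else {
                false
            }
        }
        .catch { println("Eventually failed: ${it.message}") }
        .collect { println(it) }
}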
38. Custom Coroutine Dispatcher
val singleThread = newSingleThreadContext("SingleThread")
fun main() = runBlocking {
launch(singleThread) {
println("Running on single thread: ${Thread.currentThread().name}")
}
}
Explanation: Custom dispatchers let you define exactly which threads your code runs on. This is good for fine-tuning performance or ensuring certain tasks never overlap.
Why this is useful: Greater control over threading can improve performance or avoid issues with thread affinity in certain libraries.
Improvement: Use Executors to create thread pools or leverage advanced schedulers.
39. Testing Flows
@Test
fun testFlow() = runTest {
    val flow = flowOf(1, 2, 3)
    val result = flow.toList()
    assertEquals(listOf(1, 2, 3), result)
}
Explanation: Testing flows is straightforward by collecting all emitted values and comparing them to expected outputs.
Why this is useful: Ensures correctness and prevents regressions in reactive data pipelines.
Improvement: Use virtual time in tests to control and test time-based flow operators like debounce and timeout.
40. flatMapConcat
fun outerFlow() = flowOf(1, 2)
fun innerFlow(num: Int) = flow { emit("$num-A"); emit("$num-B") }

fun main() = runBlocking {
    outerFlow()
        .flatMapConcat { innerFlow(it) }
        .collect { println(it) }
}
Explanation: flatMapConcat sequentially collects values from each inner flow. It waits for one inner flow to complete before moving to the next.
Why this is useful: Ensures order and determinism when processing sequences of sequences.
Improvement: Consider flatMapMerge or flatMapLatest if you need concurrency or cancellation of previous flows.
41. Cancellation Propagation
fun main() = runBlocking {
    val parentJob = launch {
        launch {
            repeat(10) {
                println("Child: $it")
                delay(200)
            }
        }
    }
    delay(500)
    parentJob.cancel()
    println("Parent cancelled, children stopped")
}
Explanation: Cancelling a parent job cascades the cancellation to all its children, ensuring you don’t end up with lingering coroutines.
Why this is useful: Maintains structured concurrency and prevents resource leaks.
Improvement: Combine with a structured concurrency approach to always know which coroutines are active.
42. Throttling Events
var eventJob: Job? = null

suspend fun handleEvent(event: String) {
    println("Handling $event")
}

fun main() = runBlocking {
    val events = listOf("Click1", "Click2", "Click3")
    events.forEach {
        eventJob?.cancel()
        eventJob = launch {
            delay(300)
            handleEvent(it)
        }
    }
    eventJob?.join()
}
Explanation: Cancelling the previous handler whenever a new event arrives means only the most recent event in a rapid burst is processed. (Strictly speaking this is a debounce; a throttle would instead allow at most one event per time window.)
Why this is useful: Prevents expensive operations from triggering too often, improving performance.
Improvement: Use flow operators like debounce or throttle for a more declarative approach.
43. Lazy Coroutine Execution
fun main() = runBlocking {
    val lazyJob = launch(start = CoroutineStart.LAZY) {
        println("Lazy start")
    }
    println("Before starting lazy job")
    lazyJob.start()
    lazyJob.join()
}
Explanation: Lazily started coroutines do not run until you explicitly start them, allowing you to prepare coroutines ahead of time but only run them when needed.
Why this is useful: Useful for deferring expensive initialization work until absolutely necessary.
Improvement: Combine lazy coroutines with async for controlled and optimized resource usage.
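A minimal sketch of a lazily started async, where the value is only computed when await() is called:
fun main() = runBlocking {
    val lazyValue = async(start = CoroutineStart.LAZY) {
        println("Computing...")
        42
    }
    println("Nothing computed yet")
    println("Result: ${lazyValue.await()}") // await() starts the coroutine on demand
}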
44. Timeout with Flow
fun infiniteFlow() = flow {
    while (true) {
        emit("Emitting...")
        delay(500)
    }
}

fun main() = runBlocking {
    try {
        withTimeout(2000) {
            infiniteFlow().collect { println(it) }
        }
    } catch (e: TimeoutCancellationException) {
        println("Flow timed out")
    }
}
Explanation: Applying withTimeout around a collecting operation prevents an infinite flow from running forever.
Why this is useful: Ensures your app remains responsive and doesn’t get stuck waiting for endless streams.
Improvement: Use a controlled flow completion condition or takeWhile operator for graceful termination.
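A minimal sketch of graceful completion with take (a close relative of takeWhile), reusing infiniteFlow from above:
fun main() = runBlocking {
    infiniteFlow()
        .take(3)                 // completes the flow after three emissions
        .collect { println(it) }
    println("Flow completed without a timeout")
}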
45. Parallel API Calls with Result Aggregation
suspend fun callApi1(): String {
    delay(500)
    return "API1 Result"
}

suspend fun callApi2(): String {
    delay(500)
    return "API2 Result"
}

fun main() = runBlocking {
    val results = coroutineScope {
        val r1 = async { callApi1() }
        val r2 = async { callApi2() }
        listOf(r1.await(), r2.await())
    }
    println("Results: $results")
}
Explanation: Executes two API calls in parallel, then awaits both results for a combined outcome.
Why this is useful: Improves performance in network-bound tasks, providing a faster user experience.
Improvement: Add error handling or use supervisorScope to handle partial failures gracefully.
46. Timeout with Multiple Coroutines
fun main() = runBlocking {
    try {
        withTimeout(1000) {
            launch { delay(2000) } // child needs longer than the timeout allows
            println("Children launched, now waiting for them")
        }
        println("Will not reach here")
    } catch (e: TimeoutCancellationException) {
        println("Tasks timed out")
    }
}
Explanation: If any child coroutine doesn’t complete within the timeout, the whole block is cancelled. This enforces time limits on groups of tasks.
Why this is useful: Ensures that operations don’t hang indefinitely, providing a better fail-fast behavior.
Improvement: Combine with fallback logic or retries for robust time-sensitive tasks.
47. Coroutine Scopes in Clean Architecture
class Repository(private val dispatcher: CoroutineDispatcher) {
    suspend fun fetchData(): String = withContext(dispatcher) {
        "Repo Data"
    }
}

class MyViewModel(
    private val repository: Repository,
    private val scope: CoroutineScope // injected: viewModelScope on Android, a test scope in tests
) {
    fun loadData() = scope.launch {
        println(repository.fetchData())
    }
}

fun main() = runBlocking {
    val repo = Repository(Dispatchers.IO)
    val vm = MyViewModel(repo, this)
    vm.loadData().join()
}
Explanation: Passing the dispatcher and scope in through constructors keeps concurrency details out of the business logic: the repository performs its work on the injected dispatcher (here Dispatchers.IO), while the ViewModel launches into whatever scope it is given, maintaining a clean architecture.
Why this is useful: In large codebases, decoupling concurrency details improves testability and maintainability.
Improvement: Use viewModelScope and dependency injection frameworks like Hilt or Koin for even cleaner design.
48. flatMapMerge for Concurrency
fun parentFlow() = flowOf(1, 2)

fun childFlow(num: Int) = flow {
    emit("$num-X")
    delay(300)
    emit("$num-Y")
}

fun main() = runBlocking {
    parentFlow()
        .flatMapMerge { childFlow(it) }
        .collect { println(it) }
}
Explanation: flatMapMerge runs inner flows concurrently and merges their emissions into a single stream, improving throughput.
Why this is useful: Handle multiple data sources in parallel, providing faster results.
Improvement: Use flatMapLatest if you only care about the latest emission, cancelling previous flows as new ones arrive.
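A rough sketch of flatMapLatest in a search-like scenario where only the newest query matters (the operator currently requires an experimental-API opt-in, and the queries here are made up):
@OptIn(ExperimentalCoroutinesApi::class)
fun main() = runBlocking {
    flow {
        emit("kotlin")
        delay(100)
        emit("kotlin coroutines") // arrives before the first search finishes
    }
        .flatMapLatest { query ->
            flow {
                delay(200)        // simulated search latency
                emit("Results for \"$query\"")
            }
        }
        .collect { println(it) }  // only the latest query's results are printed
}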
49. Exponential Backoff Retry
suspend fun fetchWithBackoff(): String {
    var delayTime = 100L
    repeat(3) { attempt ->
        try {
            if (attempt < 2) throw Exception("Fail")
            return "Success"
        } catch (e: Exception) {
            println("Attempt $attempt failed, retrying in $delayTime ms")
            delay(delayTime)
            delayTime *= 2
        }
    }
    return "Failed"
}

fun main() = runBlocking {
    println(fetchWithBackoff())
}
Explanation: Exponential backoff increases the delay between retries, reducing load on external systems and increasing the chances of recovery from transient errors.
Why this is useful: Common strategy in network requests and API calls to handle temporary outages gracefully.
Improvement: Adjust maximum retries and backoff factors based on system reliability and cost of retries.
50. Combining Flows with merge
fun flowOne() = flow {
    emit("One-A")
    delay(300)
    emit("One-B")
}

fun flowTwo() = flow {
    emit("Two-A")
    delay(100)
    emit("Two-B")
}

fun main() = runBlocking {
    merge(flowOne(), flowTwo()).collect { println(it) }
}
Explanation: merge interleaves emissions from multiple flows, emitting values as soon as they are available without waiting to pair them like zip.
Why this is useful: Useful when you want to handle events from multiple sources independently and concurrently.
Improvement: Combine with operators like filter and map to manage complex real-time data streams.
Conclusion
With these 50 examples, you now have a comprehensive understanding of Kotlin Coroutines. We covered basic usage, context switching, flows, error handling, testing, parallelization, lifecycle awareness, and more. Each concept builds on the last, helping you write cleaner, more efficient, and more maintainable asynchronous and concurrent code. By using structured concurrency, you ensure that your code is safe, predictable, and easy to reason about.
Next Steps: Consider integrating coroutines with Dependency Injection (e.g., Dagger/Hilt, Koin) and applying Clean Architecture principles. Explore advanced flow operators, experiment with backpressure handling, and leverage lifecycle-awareness on Android. With these tools, you can build scalable and resilient coroutine-based applications that are both performant and easy to maintain.