I have this in-memory implementation of a simple Cache in Scala using Cats Effect.
Here is my trait:
trait Cache[F[_], K, V] {
  def get(key: K): F[Option[V]]
  def put(key: K, value: V): F[Cache[F, K, V]]
}
import cats.effect.kernel.Async

case class ImmutableMapCache[F[_]: Async, K, V](map: Map[K, V]) extends Cache[F, K, V] {
  override def get(key: K): F[Option[V]] =
    Async[F].blocking(map.get(key))

  override def put(key: K, value: V): F[Cache[F, K, V]] =
    Async[F].blocking(ImmutableMapCache(map.updated(key, value)))
}

object ImmutableMapCache {
  def empty[F[_]: Async, K, V]: F[Cache[F, K, V]] =
    Async[F].pure(ImmutableMapCache(Map.empty))
}
Is this a good enough implementation? I'm restricting my effect to Async. Can I make it even more generic, so that ImmutableMapCache works with other effect types?
What other pitfalls are there with my approach?
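On the genericity question: since get and updated on an immutable Map are pure, total operations, the Async constraint (and blocking) is not strictly needed; Applicative is enough to lift the results into F. A minimal sketch of that idea, with PureMapCache as an assumed name and the trait repeated so the snippet is self-contained:

```scala
import cats.Applicative
import cats.syntax.applicative._

// The Cache trait from the question, repeated so this sketch compiles on its own
trait Cache[F[_], K, V] {
  def get(key: K): F[Option[V]]
  def put(key: K, value: V): F[Cache[F, K, V]]
}

// Assumed name: every operation on the immutable Map is pure,
// so Applicative (just `pure`) suffices -- no Async, no blocking
final case class PureMapCache[F[_]: Applicative, K, V](map: Map[K, V]) extends Cache[F, K, V] {
  override def get(key: K): F[Option[V]] =
    map.get(key).pure[F]

  override def put(key: K, value: V): F[Cache[F, K, V]] =
    (PureMapCache(map.updated(key, value)): Cache[F, K, V]).pure[F]
}

object PureMapCache {
  def empty[F[_]: Applicative, K, V]: F[Cache[F, K, V]] =
    (PureMapCache[F, K, V](Map.empty): Cache[F, K, V]).pure[F]
}
```

This does not fix the sharing problem, though: each put still returns a new cache value that callers must thread through their program themselves.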
EDIT:
Is this a better implementation where I wrap the Map in a Cats Ref context?
import cats.effect.{Ref, Sync}
import cats.syntax.all._
class SimpleCache[F[_]: Sync, K, V] extends Cache[F, K, V] {
  private val cache: Ref[F, Map[K, V]] = Ref.unsafe[F, Map[K, V]](Map.empty)

  // put must return F[Cache[F, K, V]] to satisfy the trait; since the Ref is
  // mutated in place, returning this after the update is enough
  override def put(key: K, value: V): F[Cache[F, K, V]] =
    cache.update(_.updated(key, value)).as(this)

  override def get(key: K): F[Option[V]] = cache.get.map(_.get(key))
}
If you are using blocking in the Async context then why even use Async? In creating an immutable cache you are likely to be updating the variable holding the cache to the new cache on every put, but this is opening the doors for race conditions when it is accessed by multiple threads. You are doing a lot of things which you would absolutely not want from a Cache implementation. Ref uses AtomicReference underneath, so it is safe to use in a multithreaded context.
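One remaining wrinkle in the EDIT is Ref.unsafe, which allocates mutable state outside of F and so breaks referential transparency. The allocation can instead happen inside F with Ref.of, mirroring the empty constructor from the question. A sketch under those assumptions (RefMapCache is an assumed name, and the trait is repeated so the snippet is self-contained):

```scala
import cats.Functor
import cats.effect.kernel.{Concurrent, Ref}
import cats.syntax.functor._

// The Cache trait from the question, repeated so this sketch compiles on its own
trait Cache[F[_], K, V] {
  def get(key: K): F[Option[V]]
  def put(key: K, value: V): F[Cache[F, K, V]]
}

// Assumed name: the Ref is injected through a private constructor,
// so the only way to allocate it is inside F via empty (no Ref.unsafe)
final class RefMapCache[F[_]: Functor, K, V] private (state: Ref[F, Map[K, V]]) extends Cache[F, K, V] {
  override def get(key: K): F[Option[V]] =
    state.get.map(_.get(key))

  // Ref.update is atomic (AtomicReference underneath), so concurrent puts cannot lose writes
  override def put(key: K, value: V): F[Cache[F, K, V]] =
    state.update(_.updated(key, value)).as(this)
}

object RefMapCache {
  def empty[F[_]: Concurrent, K, V]: F[Cache[F, K, V]] =
    Ref.of[F, Map[K, V]](Map.empty).map(new RefMapCache(_))
}
```

Because construction itself is an effect, two calls to empty yield two independent caches, and sharing a single cache between fibers is explicit in how the value is passed around.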