```haskell
phantom :: (Functor f, Contravariant f) => f a -> f b
phantom x = () <$ x $< ()
```
Or, more specifically, the documentation attached to it:

> If `f` is both `Functor` and `Contravariant` then by the time you factor in the laws of each of those classes, it can't actually use its argument in any meaningful capacity. This method is surprisingly useful. Where both instances exist and are lawful we have the following laws:
>
> ```haskell
> fmap f ≡ phantom
> contramap f ≡ phantom
> ```
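To make this concrete, `Const a` is both a `Functor` and a `Contravariant`, so `phantom` can freely change its second (phantom) type parameter without touching the stored value:

```haskell
import Data.Functor.Const (Const (..))
import Data.Functor.Contravariant (phantom)

-- `Const Int Bool` and `Const Int String` carry exactly the same data;
-- `phantom` converts between them without inspecting the stored Int.
example :: Const Int String
example = phantom (Const 42 :: Const Int Bool)

main :: IO ()
main = print (getConst example)
```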
So, if `fmap f ≡ contramap f ≡ phantom`, why do we need both `Functor` and `Contravariant` instances? Isn't it handier to do this the other way around: create an instance of a single class `Phantom`, which introduces the `phantom` method, and then automatically derive the `Functor` and `Contravariant` instances from it?
```haskell
class Phantom f where
  phantom :: f a -> f b
```
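A minimal sketch of how that could look in practice (the `Phantom` class and the `Tag` type here are hypothetical, not part of any library; the method is named `phantom'` only to avoid clashing with the existing `phantom` function):

```haskell
import Data.Functor.Contravariant (Contravariant (..))

-- Hypothetical class for types whose last parameter is phantom
class Phantom f where
  phantom' :: f a -> f b

-- A Const-like functor: the `b` parameter never appears on the right
newtype Tag a b = Tag a deriving Show

instance Phantom (Tag a) where
  phantom' (Tag a) = Tag a

-- With this design, both variance instances become one-liners:
instance Functor (Tag a) where
  fmap _ = phantom'

instance Contravariant (Tag a) where
  contramap _ = phantom'

main :: IO ()
main = print (fmap not (Tag "x" :: Tag String Bool))
</imports>
```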
We would rid the programmer of the necessity to write this phantom behaviour twice (as `fmap` and `contramap`, which are both `const phantom`, as stated in the documentation) when implementing instances for `Functor` and `Contravariant`. We would allow writing one instance instead of two! Besides, it seems nice and idiomatic to me to have classes for all 4 cases of variance: `Functor`, `Contravariant`, `Invariant` (yet, some suggest using the `Profunctor` interface instead of `Invariant`), and `Phantom`.
Also, isn't it a more efficient approach? `() <$ x $< ()` requires two traversals (insofar as we can "traverse" a phantom functor at all...), whereas the programmer might be able to carry this transformation out faster. As far as I understand, the current `phantom` method can't be overridden.
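For instance (a hypothetical override; in the actual library `phantom` is a plain function, not a class method), a type whose parameter is genuinely phantom could perform the conversion as a single zero-cost cast rather than the two passes of `() <$ x $< ()`:

```haskell
import Data.Coerce (coerce)
import Data.Functor.Const (Const (..))

-- Const's second parameter is phantom, so the conversion can be a
-- no-op coercion: no traversal of the structure happens at all.
phantomConst :: Const a b -> Const a c
phantomConst = coerce

main :: IO ()
main = print (getConst (phantomConst (Const (42 :: Int)) :: Const Int String))
```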
So, why didn't the library developers choose this design? What are the pros and cons of the current design versus the one I described?