I understand what you're saying, but I would suggest they are only simple to deal with in post because they've had the most work put into them to achieve the in-camera profiles.
It is widely known that ARRI CEO Glenn Kennel was an expert on film colour before he joined ARRI to help develop the Alexa. Film had been in development for decades, with spectacular investment in its colour science, so basing the Alexa's colour science on film was standing on the shoulders of giants.
Glenn's book is highly recommended; I learned more about the colour science of film from one chapter of it than from years of reading everything I could find on the topic online:
Also, Apple have put an enormous effort into the colour science of the iPhone, which has been the most popular camera on earth for quite some time now - according to Flickr's stats, anyway.
I have gone on several trips where I was shooting with the XC10 or GH5 while my wife took stills with her iPhone, so I have dozens of instances where we were standing side by side at a vantage point, shooting the exact same scene at the exact same time. Later, in post, I tried replicating the colour of her iPhone shots with my footage, and only then realised what a spectacular job Apple have done with their colour science - the images are heavily processed, with lots and lots going on in there.
And now that I have a BMMCC, with my OG BMPCC on its way, I will add that the footage from these cameras also grades absolutely beautifully straight out of camera - Blackmagic (as well as Fairchild, who made the sensor) did a great job on the colour science there too. The P4K/P6K footage is radically different and doesn't share the same look at all.