pietz

Members
  • Posts: 136
  • Joined
  • Last visited
  1. pietz

    Sony a6300 4k

    The last time I was this impressed was when the A7R II came out. The autofocus of the a6000 was the first and only one I would actually consider using as a videographer, and the a6300's seems to be even better. Just wow.
  2. @Emanuel Sorry mate, I can't answer your two questions, as I'm not an engineer. I just know that things move forward, and overheating might soon be a problem of the past as sensors get more efficient. Your entire post sounds like you're arguing that 10-bit will not be available in future semi-professional cameras. Well, you might be right, and I never said anything against that. My post covered two things: A) it was wrong of you to blame 4:2:0 for the banding in the mentioned example; B) in my opinion, 10-bit is much more important than 4:2:2, for the reasons already stated. I never argued about the availability of 10-bit in the future, and I do hope you weren't referring to me when talking about "whining".

    Now comes a part that isn't obvious at all, and I don't blame you for getting it wrong: 10-bit h264 footage will be smaller and less CPU-heavy than 8-bit footage, which also helps with overheating. "WHAAAAAT, you're completely insane, pietz." That's a fair reaction. See, because most of the cameras we talk about can output uncompressed 10-bit footage over HDMI, it's easy to assume that it's actually recorded that way. To save it inside an 8-bit h264 file, it has to be downsampled to 8-bit before it's actually encoded. That's CPU-heavy, and skipping this step might actually result in less heat. It also results in smaller files, because a higher bit depth, and therefore higher color accuracy, causes fewer truncation errors in the motion-compensation stage of the encoding; this increases efficiency because there's less need to quantize. I apologize, as you successfully stepped into my trap of making this point. It sounds like magic, but it's science! By the way, don't take my word for it: http://x264.nl/x264/10bit_02-ateme-why_does_10bit_save_bandwidth.pdf
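The truncation-error claim above can be sanity-checked with a minimal sketch (this is a toy illustration of quantization error at different bit depths, not a model of the h264 pipeline itself; the sample count is an arbitrary assumption):

```python
# Worst-case rounding (truncation) error when a smooth 0..1 luma ramp is
# quantized to a given bit depth. Higher depth = smaller error for the
# encoder's motion-compensation stage to spend bits on.
def max_quant_error(bits, samples=1 << 16):
    levels = 2 ** bits
    worst = 0.0
    for i in range(samples):
        v = i / (samples - 1)                        # ideal continuous value
        q = round(v * (levels - 1)) / (levels - 1)   # nearest representable level
        worst = max(worst, abs(v - q))
    return worst

err8 = max_quant_error(8)
err10 = max_quant_error(10)
print(err8 / err10)  # roughly 4x larger worst-case error at 8 bits
```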
  3. Hey man, I'm no photographer, but I think this is a fairly simple question to answer. Let's start at the beginning:

    1) New camera: Is 12MP enough for you? If yes, then the A7s and A7s II are excellent choices. The best, really, for what you want to do with them. You will be surprised how good they are in low light, probably even better than your high expectations. If no, think about going broke for the A7R II, as it's also brilliant in low light and comes with IBIS, which you seem to dig. It's super expensive, but completely worth it and a good investment for the next decade if you ask me.

    2) A7s vs. A7s II: The major differences for a photographer between the A7s and A7s II are IBIS and more focus points. I would argue that the upgrade is not worth it to you at all; I'm actually quite certain. The A7s is already so good in low light that you probably don't need IBIS to capture more light in the first place. And since your aim is astrophotography (where objects usually don't move quickly), you don't need those extra focus points anyway. Based on your question, I would easily recommend the A7s; the only thing holding you back should be its resolution of 12MP. Save the extra cost of the A7s II for other equipment. I don't know much about astrophotography, but I thought a high-resolution sensor was especially important there; if the A7s made it onto your shortlist, though, that's probably not the case. To my understanding, pictures taken with the A7s are tack sharp at native resolution. Also, wait another 2-4 weeks before your purchase; the price of the old A7s will probably drop even further. And don't forget to send me some sweet, sweet wallpapers, OK?
  4. I believe the jump from 8-bit to 10-bit is far more important than 4:2:2 instead of 4:2:0. The bump to 4:2:2 increases the theoretical amount of picture information by about 1.33x, whereas 10-bit multiplies the number of representable colors by 64x. The A-B-C example that Emanuel posted doesn't prove the opposite, for quite a few reasons:

    1. When testing the impact of a certain change or setting, you can only vary one factor at a time to draw conclusions about that setting. But as has been pointed out, not only did it change from 8-bit 4:2:0 to 8-bit 4:2:2: the entire system that recorded the scene changed, including the codec being used. That's a massive flaw in the test, and therefore it doesn't prove anything about 4:2:0 vs. 4:2:2.

    2. Better chroma subsampling only increases the accuracy of the chroma information, meaning a black-and-white picture looks identical in 4:2:0 and 4:2:2. The ABC example still shows banding even when converted to black and white, so the banding isn't caused by the chroma subsampling.

    3. If 4:2:0 were to blame for the banding in the picture, then the 4:2:2 example would show the same kind of banding along the horizontal line of the vignette circle. Why? Because 4:2:2 and 4:2:0 have exactly the same amount of information on any horizontal line of pixels; it's only the vertical chroma resolution that's improved.

    Quite contrary to Emanuel's opinion, I think people are way too focused on 4:2:2. It's really just a leftover from the days of interlaced footage that adds only a tiny bit of new picture information. If something bothers you about 4:2:0, it should also bother you when looking at horizontal lines in 4:2:2 footage, as they have the same chroma resolution. 10-bit, on the other hand, is HUGE. Companies went from 10-bit raw to 12-bit raw to 14-bit raw, and it's still going. Now, in my personal opinion, we have reached what makes sense in that regard, and everything above is just marketing. But since 8-bit is the lowest bit depth at which we don't see banding, it makes A LOT of sense to go at least one step higher to have a little room for grading. 8-bit accounts for only about 0.02% of all the colors that 12-bit includes, and if people don't even think that's enough, there are plenty of reasons to upgrade to 10-bit.
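The numbers in the post above can be reproduced with simple arithmetic. A minimal sketch, assuming the usual J:a:b subsampling notation over a 4-pixel-wide, two-row block:

```python
# Samples per pixel for a Y'CbCr layout in J:a:b notation:
# a = chroma samples in the first row of J pixels, b = in the second row.
def samples_per_pixel(j, a, b):
    chroma = (a + b) / (2 * j)   # one chroma plane, averaged over both rows
    return 1 + 2 * chroma        # luma is always sampled at full resolution

gain_422 = samples_per_pixel(4, 2, 2) / samples_per_pixel(4, 2, 0)
print(gain_422)  # 2.0 / 1.5 = 1.33x more raw picture information

colors_8bit = (2 ** 8) ** 3       # ~16.7 million representable colors
colors_10bit = (2 ** 10) ** 3     # ~1.07 billion
colors_12bit = (2 ** 12) ** 3     # ~68.7 billion
print(colors_10bit / colors_8bit)          # 64x more representable colors
print(100 * colors_8bit / colors_12bit)    # ~0.024% - the "tiny slice of 12-bit" claim
```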
  5. Andrew builds some amazing stuff, and I totally think he should make some money from the time he invests in EOSHD, but isn't the GH4 LOG Converter nonsense in theory? Log gives benefits in terms of higher dynamic range and smooth highlight roll-off, but these two factors only come into play if they are applied between capturing the light and saving it in a video file. After that, the dynamic range is set, and so is the definition in the highlight roll-off. Taking regular footage and making it flat doesn't give you any more information than you had in the first place. Quite the opposite: flattening footage and making it pop again means losing information in the process. Yes, I agree the results look good in the examples, but one should be able to achieve the same thing by grading the original footage directly. What am I missing?
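The "flattening loses information" argument can be illustrated with a toy round trip. The curves below are made up (a simple gamma pair), not the actual GH4 LOG Converter; the point is only that applying a flattening curve to footage that is already quantized to 8 bits, then grading it back, collapses distinct levels:

```python
# Hypothetical "log-like" flatten curve and its inverse, both 8-bit quantized.
def flatten(v):                      # lift shadows, compress highlights
    return round(255 * (v / 255) ** 0.5)

def restore(v):                      # inverse curve, also quantized to 8-bit
    return round(255 * (v / 255) ** 2.0)

survivors = {restore(flatten(v)) for v in range(256)}
print(len(survivors))  # fewer than 256 distinct levels survive the round trip
```

Highlight values collide in the flatten step and shadow values collide in the restore step, so information is destroyed in both directions, exactly as the post argues.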
  6. Wow, this makes complete sense. Thank you for your time! So why is everybody so psyched about 4:2:2 these days? Sure, it's twice as much color information compared to 4:2:0, but I've never looked at footage (even graded material) and thought, "4:2:2 or 4:4:4 would have made this look better." If the lack of color information bothers you in 4:2:0, it should also bother you in the horizontal direction of 4:2:2. Why favor the quality of vertical color information over horizontal? And yet all editing codecs come with at least 4:2:2. I don't get it. I think it makes a lot of sense to start delivering 10-bit internally in semi-professional cameras, but 4:2:2 I can easily live without. From what I've experienced, there's just no good reason to use it...
  7. Knowing how chroma subsampling works, I often ask myself why 4:2:2 is the (semi-)professional standard when 4:4:4 is not an option. It bothers me to imagine that the color information is full in the vertical direction but only half in the horizontal. And why does everybody use 4:2:2, while nobody ever talks about 4:4:0? Why is the vertical color information more important than the horizontal? To my understanding, it should be the other way around: since most of the motion in a film is horizontal, shouldn't the horizontal color information be full instead of the vertical? Anybody here who can elaborate? It would be highly appreciated. -Pietz
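The horizontal/vertical asymmetry in the question above can be made concrete. A minimal sketch of the common layouts in J:a:b notation (assuming the standard 4-pixel-wide, two-row reference block):

```python
# Chroma resolution (as a fraction of luma resolution) in each direction.
# a = chroma samples in row 1 of the 4-pixel block, b = samples in row 2
# (b == 0 means row 2 reuses row 1's chroma, halving vertical resolution).
layouts = {"4:4:4": (4, 4), "4:2:2": (2, 2), "4:4:0": (4, 0), "4:2:0": (2, 0)}

resolution = {}
for name, (a, b) in layouts.items():
    horizontal = a / 4                 # how many of the 4 columns carry chroma
    vertical = 1.0 if b > 0 else 0.5
    resolution[name] = (horizontal, vertical)
    print(name, resolution[name])
```

4:2:2 and 4:4:0 come out as exact transposes of each other, which is precisely the comparison the post asks about.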
  8. And I'm 70% sure that both of you don't understand how percentages work.
  9. I've been saying this for years and always get weird looks all over the place, but the relationship between Sony and Panasonic is strikingly similar to that of RED and ARRI. One tries to innovate wherever it can, which always looks great on paper; the other focuses on complete reliability and only uses what works 100%. If moiré isn't a huge deal on the A7R II, there is another benefit over the A7s: a native ISO of 3200 is annoying as f*ck, and while it's awesome to have night vision in our camera, we won't need it more than 2% of the time. Shooting in bright daylight and having to use ISO 3200 happens a whole lot more often. Using variable NDs at these settings is almost impossible, because all of them suck at high ND values, and fixed NDs are just as annoying. A native ISO of probably 800 is a lot easier to deal with if you want to use S-Log.
  10. I disagree with those saying that "especially now, Panasonic needs to deliver V-Log for the GH4". We're lucky that the update is rumored to drop very soon, because if anything, Panasonic needs us to buy their next camera. And that's not going to happen if the GH4 takes away another feature the GH5 could show off. They will only make us buy their next product if they amaze us.

    V-Log is definitely the first thing on my list, closely followed by 10-bit; the latter got even more important now that the A7R II doesn't have it. Personally, I never cared for 4:2:2 and don't see why it's such a big deal to some, but 10-bit ProRes or CineForm would be a great start. Since the announcement of the A7R II, IBIS is also a must. This will mess up Panasonic's entire lineup of OIS lenses, but that's a shot they have to take. Lastly, a 4K multi-aspect sensor, to come as close as possible to a larger sensor format; since this results in a 12MP sensor, it will also help in low light. The problem with sensors is that there are not many great ones out there, and you have to take what you can get.

    But if we're honest: Panasonic is done. They won't have a market after this camera drops, and after the A7s II even less. Those Samyang lenses are, surprisingly, still a lot cheaper than MFT glass, so for filmmakers the cost of lenses isn't even a downside of the A7 series. Oh man, Pana, you've got to strike hard and start delivering RELEVANT firmware updates frequently. I'd love to work for them in hard times like these.
  11. Thanks for your time, John, and no, the crop seems to be native pixels, just like the regular 4K mode. Even for anamorphic shooters, this just doesn't sound like big news then. Sorry, I don't want to spread a bad attitude, but seriously, what's all the fuss about? We were able to shoot 4:3 video at the same resolution before. I challenge anybody to see a difference between 25p and 24p; it's not as if one of us is shooting a Hollywood picture with the GH4 that needs to be 24p... And even if you needed 24p, it was all there: you just needed to crop the 4K width from 3840 to 2880. And since you need to adjust the footage in post either way, it's not a big deal at all. Panasonic could at least have given you anamorphic shooters a destretched live preview and destretched footage...
  12. I read the changelog and I understand what's in it, but what exactly is possible with the 2.2 firmware that wasn't possible before? I'm asking because, to my understanding, the new anamorphic recording is not destretched in live view, nor is the actual recorded footage. So what's better now than shooting in the 4:3 photo mode or just using 4K and cropping the sides? I feel extremely stupid; I must be overlooking something, because everybody is so psyched. But what?
  13. The human eye can distinguish between 2 and 10 million different colors, depending on whom you ask and which study you believe. That's between 7 and 7.75 bits per channel. If you think you can recognize 14-bit original footage in an 8-bit video uploaded to YouTube, you must have superpowers, because it's just not possible: not only does the video contain more colors than you can see, it is also the original brought down to the 8-bit color space. That video you're referring to looks so disgustingly oversaturated that it hurts my eyes. In the very first shot, when her head covers the sun, she has the same color as a pig that has been eating too many pumpkins in bright sunlight. There isn't even any color separation in her face; it's just different shades of pumpkin-pig-skin color. I find it hard to believe that anyone finds this look attractive, but that's personal opinion, I guess. Try it yourself: download the clip, bring it into an NLE, pull the saturation all the way down, and then bring it up until it looks "right" without looking at the numbers. I got to -24%. Taking it back to the original afterwards shows you how oversaturated it is, and you'll also see that there is no separation in the color tones.
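The "7 to 7.75 bits" figure above follows directly from the color counts. A quick sketch (assuming the distinguishable colors are spread evenly across three channels):

```python
import math

# Bits per channel needed so that three channels together span N colors:
# N = (2 ** bits) ** 3, therefore bits = log2(N) / 3.
def bits_per_channel(total_colors):
    return math.log2(total_colors) / 3

low = bits_per_channel(2_000_000)     # ~6.98 bits for 2 million colors
high = bits_per_channel(10_000_000)   # ~7.75 bits for 10 million colors
print(round(low, 2), round(high, 2))
```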
  14. This doesn't sound like something Panasonic would do. I hope. However, the wording sounds very solid, not like the usual "the gh5 will have ibis!!1 yolo" trash rumors. I would most definitely switch systems if it's a paid update, even if it's just a few bucks. The competition is extremely strong these days, with lots of other companies to switch to. Panasonic has never been "great" about firmware updates (not terrible either), and going this route would make me sell my GH4 immediately and get an NX1 (I'm on the edge anyway). This kind of marketing and selling scheme disgusts me. Panasonic themselves said that the feedback and sales on the GH4 are way better than expected. Why not give a little back to the users as an act of good faith? That way, they'd be building trust for future cameras, which is something they need badly. Let's face it, we all know the NX2 will be better than the GH5, so trust, belief, and good faith are things they should be aiming for. I have trouble understanding why so many companies don't get that. But until it's official, I'll be leaning back and sipping iced tea.
  15. Every update that brings new features is a good update, but I clearly don't agree with Panasonic's approach here. Anamorphic recording on a device like the GH4 is still such a niche feature; I don't think more than 2% of all GH4 owners will actually use it. Plus, the implementation is rather poor without a destretched live preview. And let's not forget, anamorphic recording was possible before and just meant some adjustments on the computer. So why not work on something that a majority of people actually need, like putting more pressure and time into V-Log? With Panasonic's history and knowledge, they could have delivered it months ago. What about punching in while recording? Or a 21:9 4.5K recording? That involves less data than the 4:3 4K photo-mode recording and would make so many people happy. Why not also deliver a regular 3K recording that covers the entire sensor and doesn't push the crop further, like the 4K mode does? The GH4 is a beast of a camera, but there are still so many things they could work on...

    On a different topic: does anybody have a good article about the physics of anamorphic recording? Everybody always talks about the "awesome look" that's not so generic, but I would like an actual technical explanation of the matter: why it actually looks so different, and where the horizontal flaring comes from. Thanks.