
Ilkka Nissila

Members
  • Posts: 155
  • Joined
  • Last visited

1 Follower

About Ilkka Nissila

Profile Information

  • Gender
    Male
  • Location
    Finland
  • Interests
    Documentary style photography and video, events, people, music, nature.
  • My cameras and kit
    Nikon Z8, Zf

Contact Methods

  • Website URL
    www.ilkka-nissila-photography.fi

Ilkka Nissila's Achievements

Active member (3/5)

94 Reputation

  1. I don't quite see it that way. When social media platforms are viewed on a computer, the browser takes up all the available display area and fits the content to the window, which can be vertical, horizontal, or square for that matter. Basically only when social media is viewed on a mobile device do some apps and websites default to vertical viewing, and that's a limitation of the device and of the way people typically hold it. Originally Instagram photos were square, not vertical or horizontal. Some social media platforms assume that a video is shot vertically on a mobile phone, and for a time it wasn't even possible to shoot in horizontal orientation and have the site or app display it correctly; it would always force the vertical format. This, however, is incompatible with the way most news media sites present video, which is horizontal only, mimicking TV. When those news sites then displayed social media or cell phone videos, they could not technically show the video as a vertical, so they generated blurred sides to turn the vertical video into a horizontal one. This is all a bunch of nonsense really.
Vertical videos make it difficult to show the context and environment in which something is happening. This is why cinema and TV are in landscape orientation: it's better for displaying the content. Photos have always been shot both vertically and horizontally (probably most still horizontally, for the same reason as video), because in stills the continuity can be broken: one can simply flip the camera quickly to vertical, shoot some (portrait) shots that way, and return to landscape orientation to show context; in video one cannot do such flipping without causing problems for the viewer. Books and magazines naturally lend themselves to images in portrait orientation or, in some cases, square; displaying a landscape image at a large size requires a double-page spread, which of course is commonly done, but it does create issues if an important part of the image falls in the middle. What's more, the verticals in (still) photography were traditionally not anything remotely like 9:16 but 4:5, 3:4, and 2:3.
I seriously think social media apps and sites should consider making the vertical format something like 4:5 rather than 9:16, as the latter is just not very good. It's too narrow; a device that fits inside a pocket is an extreme constraint. Clearly, if the main reason advertising clients ask for vertical videos is people looking at their mobile phones on the tube or the bus, or wherever, the quality loss from cropping 16:9 is hardly going to be visible on those tiny displays. Sure, the angle of view is narrower, but it's always going to look awkward having such an extreme aspect ratio in a vertical image. Interesting to hear that there are now high-resolution displays showing video content in public. I can't remember for sure seeing such things myself, though it's possible that I have seen one and didn't pay attention to it. I would be very surprised if those displays were as elongated as 9:16, though. It just doesn't make any visual sense to use such an extreme aspect ratio for vertical content when there is the choice to stick to 4:5 or 2:3. And when those much more suitable aspect ratios are used for the vertical content, the crop from landscape 16:9 is less extreme and easier to manage (a rough crop comparison is sketched after this list).
  2. Sounds like random people making stuff up; the ZR has a fast read time in video mode (for a relatively low-cost mirrorless camera); it doesn't make any sense to make a video-first camera based on a sensor that is more than 10 years old and has a very slow read time. I couldn't find any reports of it on NR.
  3. AI is not a person or a human being, and it doesn't share evolutionary history or biological safeguards with us. It is therefore more unpredictable what it might do. I share a lot of the concerns you express in your article. It's worth following what Bernie Sanders has been saying about AI and its impact on the workforce, and that the benefits of AI should be shared among all of humanity and not concentrated in the hands of a few ultra-rich people. Musk has been claiming that work will become optional in the future and that a universal income could allow us to do anything we want, but everything that the big seven companies and their billionaire owners have been doing suggests that they only care about power and getting even richer, and are not at all likely to share the riches with the people. What in their past and current behavior would make anyone believe this would ever change without society and its political leaders forcing a change? Musk seems to think he is player 1 in a computer simulation (the world), and so everything that happens is part of a game and an adventure to him. World destroyed? No matter. Restart simulation. As long as he gets to try to make it to the next level (Mars) in the game, that's all that matters. We are all just extras in the game. What concerns me the most is that in the race for Mars and superpowerful AI, the Earth's environment, the climate, and its people are sacrificed, and yet Mars is and will likely always remain hostile and unsuitable for human life, so all that we have could be sacrificed on a useless and pointless goal by a person who doesn't have all the birds at home. The situation is a clear demonstration of why individual wealth must be limited and redistributed when it gets out of hand.
  4. The only principle they follow is defined by their self-interest. If a law or moral principle exists which they think would help them gain more power or wealth, they use it to argue that others should follow it. But they never feel the need to obey laws or ethical principles if doing so would be disadvantageous to their attempts to increase their power or wealth. Similar to Russia, which cries wolf when Western countries freeze its foreign assets but sees no problem in the looting and killing of Ukrainians. These are examples of people who are guided only by their self-interest and will do anything to gain more and more power and wealth. What is amazing is how the common people actually voted those people into positions of power.
  5. The high dynamic range (using DGO technology) in the Sony A7 V is for low-to-middle-ISO stills when using the mechanical shutter; DGO is not used for video, and there certainly won't be any 16-stop dynamic range at ISO 3200 or 8000. The claimed 16 stops is likely achieved on a significantly downsampled ISO 100 still image with a criterion based on engineering dynamic range (SNR = 1); a back-of-the-envelope calculation of the downsampling effect is sketched after this list. Do the EOSHD website and the browsers used by its visitors support high dynamic range photos on Super Retina XDR and other HDR screens? Otherwise I'm not sure what the OP is looking to see. Having lower noise can't harm the image, and it's up to the user to make use of the higher fidelity, or not.
  6. I believe it's just mainstream social media sites such as Facebook, Instagram, Twitter/X, LinkedIn etc., that they care about, not small niche forums on very specific topics not related to politics. I think it's safe to visit the US unless you have a written record of publicly speaking against Trump or his policies, in which case it might not currently be safe.
  7. I think in any given time window, a truly good movie is a rare thing. It's not that there are no good films being made now; rather, we remember those old films which left a lasting impression on us and tend to forget those which were not good. For films made in the 1980s and 1990s we remember the very best ones. For films made in the 2020s we are more likely to remember the latest ones we saw. High-image-quality cameras (be it high dynamic range or resolution) don't make things worse in terms of the quality of the outcome, but it may be that they motivate the production to aim for greater perfection in some sense and then not realize that technical perfection is not necessarily a worthy goal on its own if it leads to losses in other areas, such as the story and dramatic intent.
I think visual aesthetics have been changing with the ubiquity of the mobile phone camera, the kind of processing that phone manufacturers apply to images by default, and the kind of post-processing that people apply to their images on Instagram etc. People who have grown up on these devices are used to the auto-HDR AI look and may think that kind of look is normal and looks good. Cinema cameras that capture high dynamic range allow that kind of post-processing to be applied, but they also allow other options; it is how they are used that matters. As camera and TV (particularly streaming) resolution has been increasing, it is possible that, to get technical perfection, producers think all the actors need to be really beautiful with perfect skin etc., as they are shown in such fine detail in the movie. Post-processing edits to how skinny models look on magazine covers or online, and the fixing of imperfections by plastic surgery or post-processing, have also led to a new aesthetic, like a race that got out of hand, leading to ever less realistic photographs and movies. If everything is processed to look like a tone-mapped fake HDR image with local tonal variations everywhere and no contrast between the different elements in the scene, and all the characters are super perfect, then there is a huge disconnect with reality. Classic films often had rough characters along with the beautiful, which made things look realistic even if the lighting was hard and stylized (by necessity, as the film stock required a lot of light, so hard lights were used and there had to be intent). Actual HDR technology can help avoid the tone-mapped HDR look and keep shadows dark while still showing detail (preserving the global contrast between parts of the image). However, how this technology is used is up to the people making the movie, of course.
I have to admit that most of my favorite movies were shot on film, although I do like several that were shot on digital. I don't think shooting on film per se makes those movies look good, but it may be that the filmmakers were able to choose an aesthetic (through film, lighting, and costume choices etc.) and hold creative control over it with a firmer hand when using traditional techniques. This could also be why camera manufacturers have recently been adding "looks" and "grain" baked into the footage as options. They can help lock in a certain look, and the added grain discourages excessive mucking about with the image in post-processing. However, to me this seems like a less than ideal solution; the ideal would be for the team members to communicate, understand the intent, and work together to achieve it.
I notice there is no agreement online as to what look is good; people have wildly differing opinions on such topics. Thus it is up to us as viewers to select our favorites and enjoy them rather than hope that every new movie follows the same aesthetics. That will never happen, of course, as there are so many opinions.
  8. Curves are available in the advanced settings when creating custom Picture Controls (which you can then upload as recipes).
  9. I think the mechanical systems that allow the back LCD to tilt behind the optical axis as well as open to the left for selfie orientation are more complicated and require more parts than what Nikon is using in the ZR; this would make the camera heavier, larger, and more expensive (which would make it less attractive to many people, and might not solve the problem the current design solves). Higher-end models will no doubt be made over time with different solutions for how the LCD turns into different orientations. The Z8 and Z9 offer a screen which does not tilt forwards (selfie orientation) but does keep the LCD approximately on the optical axis.
  10. I could never understand the "accelerated" manual focusing; it just makes things more difficult and unpredictable. Nikon fortunately has released firmware updates for most of the S-line lenses (exception: the 14-24/2.8) adding what people call linear manual focusing (I'm not really sure what is linear about it; what it does is make focus ring position and focus distance correspond to each other in a bijective relationship, at least within a power cycle of the camera). What's even nicer is that you can choose how much you have to turn the ring to achieve a given focus change, so it is adaptable to different users and needs. I think focus by wire should never have been accelerated by default in any lens; a toy sketch of the difference between the two behaviors follows this list. As for the priority on autofocus, mirrorless so-called hybrid cameras and their lenses are still a bit more (stills) photography-oriented than video-oriented, and so the needs of the stills shooters come first in most models. Autofocus is very useful when you want consistent focus on the eye, for example, or when shooting action subjects (again, stills). For some things (such as when multiple subjects at different distances have to be sharp in the frame and the best way to achieve this is to focus in between them) manual focus is better, but manufacturers chose to prioritize ease of use over the needs of skilled users. Lenses with mechanical manual focus are of course available, natively and via adapters, for those who prioritise MF.
  11. Since RED says the colorimetry and gains are different in R3D NE vs. N-RAW, this seems to support that. Nikon has traditionally done a white balance adjustment before storing the values in the RAW file, and the raw conversion software has to know what processing has been applied in order to correct the WB. My guess is that RED might not do that (to preserve consistency across the different cameras storing R3D files), and so the colors come out different in the different raw formats; a simplified sketch of the two storage strategies follows this list. RED also does not adjust sensor gain between intermediate ISO settings as far as the values stored in the raw file are concerned, apart from the two base ISOs, if I understood this correctly, and this approach is also used in the ZR's R3D NE. Nikon applies different gains to the data at intermediate ISO values as well when storing data in N-RAW files. So the two formats work somewhat differently and are intended for different post-processing pipelines.
  12. I've had good experiences with Prores 422 HQ on the Z8, and I wonder the same thing: why don't YouTubers test it and show the results in their comparisons? While the data rate is quite high (note that Nikon has stated they will add Prores 422 LT to the ZR in a firmware update), it is a video format for which in-camera distortion and vignetting corrections are available, and there is some noise reduction in play as well. CineD has tested it and it is included in their database, and it seems from the numbers that in Prores HQ mode the camera applies less noise reduction to high-ISO files than in h.265. It seems like a good compromise, and if the LT version comes soon it might change some minds.
  13. Panasonic is reading the sensor more slowly in both the normal and DR Boost modes, which explains how they can get more DR out of it than Nikon does in their implementation. It may or may not be the same sensor. In any case Nikon's compromise is different from Panasonic's, and both are legitimate choices. The ZR was under development before Nikon acquired RED, and what RED know-how they added to this camera is likely in the firmware (and in the post-processing support for the R3D NE pipeline). In CineD's testing, the latitude test shows better retention of color across exposure adjustments in post when using R3D NE than N-RAW, so it would seem the RED acquisition has already paid off in making Nikon more competitive in the video arena, and this is not just marketing if it benefits users. What Nikon should do now is try to make the h.265 a bit more competitive so that more people who cannot handle the raw data rates can still benefit from the camera; it would be very costly if everyone had to shoot everything in R3D NE to get the benefits. I personally am looking forward to seeing some Prores 422 HQ material shot with the camera and seeing how it fares in comparison with the Z8.
  14. Is there somewhere where we can see an example of this problem vs. another camera with a better implementation of h.265? I think it's understandable that when a highly compressed video codec is used, there is noticeable quality loss and the manufacturer is trying to mitigate this with some algorithmic processing of the data. Is it really the case that the quality of the h.265 is worse than in a previous model from Panasonic or another camera in a similar price class, or could it be a case of increasing expectations over time as we see high-quality footage using better screens more often?
  15. How about Prores 422? Prores 422 4K at 25 fps is 433 Mbps, vs. 2.3 Gbps for Prores RAW 6K (normal) and 3.5 Gbps for Prores RAW HQ (a quick storage calculation is sketched after this list). I would think Prores 422 on the S1II is likely to be a good intermediate-sized format between RAW and h.265; at least going by my Nikon experience, the quality should be very good.
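
A rough crop comparison, referring to item 1: the short Python sketch below computes how much of a 16:9 UHD frame survives when it is cropped to the vertical aspect ratios discussed there. The 3840x2160 frame size and the ratios are just illustrative numbers, not a spec for any particular platform or camera.

    # How much of a 3840x2160 (16:9) frame remains after a full-height
    # vertical crop to the given portrait aspect ratio. Numbers are
    # illustrative only.
    FRAME_W, FRAME_H = 3840, 2160

    def vertical_crop(ratio_w, ratio_h):
        """Return crop width and the fraction of frame width kept."""
        crop_w = FRAME_H * ratio_w / ratio_h
        return crop_w, crop_w / FRAME_W

    for w, h in [(9, 16), (4, 5), (2, 3)]:
        crop_w, kept = vertical_crop(w, h)
        print(f"{w}:{h} -> {crop_w:.0f} x {FRAME_H} px, keeps {kept:.0%} of the width")

With these numbers a 9:16 crop keeps roughly a third of the original width, whereas 4:5 keeps close to half, which is why the 4:5 crop is easier to manage.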
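
On item 5: the back-of-the-envelope calculation below shows how downsampling inflates engineering dynamic range figures. Averaging k pixels cuts uncorrelated noise by roughly sqrt(k), so DR measured with an SNR = 1 criterion gains about 0.5 * log2(k) stops. The megapixel figures are made up for illustration and are not claims about any specific sensor.

    import math

    # DR gain (in stops) from downsampling, assuming uncorrelated noise:
    # averaging k pixels reduces noise by sqrt(k) -> +0.5 * log2(k) stops.
    def dr_gain_stops(native_mp, downsampled_mp):
        k = native_mp / downsampled_mp
        return 0.5 * math.log2(k)

    print(dr_gain_stops(33, 8))   # ~1 stop when downsampling ~33 MP to ~8 MP
    print(dr_gain_stops(33, 2))   # ~2 stops when normalizing to ~2 MP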
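
On item 10: a toy sketch of the two focus-by-wire behaviors described there. It has nothing to do with actual Nikon firmware; the gain constants are invented purely to illustrate why a speed-dependent ("accelerated") mapping is not repeatable while a linear one is.

    # Toy model only; the constants are made up for illustration.

    def linear_focus(ring_angle_deg, full_throw_deg=180.0):
        """Linear MF: ring angle maps one-to-one to a normalized focus
        position, so the same angle always lands on the same distance."""
        return max(0.0, min(1.0, ring_angle_deg / full_throw_deg))

    def accelerated_focus(current_pos, delta_deg, delta_t):
        """'Accelerated' MF: the step depends on how fast the ring is turned,
        so the same total rotation can end at different focus positions."""
        speed = abs(delta_deg) / max(delta_t, 1e-6)    # degrees per second
        gain = 0.002 * (1.0 + speed / 90.0)            # faster turn -> bigger step
        return max(0.0, min(1.0, current_pos + gain * delta_deg))

    # The same 90-degree turn made slowly vs. quickly ends in different places:
    print(linear_focus(90.0))                  # always 0.5
    print(accelerated_focus(0.0, 90.0, 2.0))   # slow turn
    print(accelerated_focus(0.0, 90.0, 0.25))  # fast turn, much larger jump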
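
On item 11: the snippet below only illustrates the two raw-storage strategies as the post describes them; it is not a description of Nikon's or RED's actual processing, and the gain values are invented.

    import numpy as np

    cfa = np.array([1200.0, 800.0, 600.0])   # hypothetical R, G, B sensel values
    wb_gains = np.array([1.8, 1.0, 1.4])     # hypothetical as-shot WB gains

    # Strategy A: scale by (part of) the WB gains before storing. The converter
    # must know the applied gains to undo them, or colors will not match.
    stored_a = cfa * wb_gains
    recovered_a = stored_a / wb_gains

    # Strategy B: store the sensel values untouched and keep the gains only as
    # metadata, applying them in post. Stored data stays consistent across bodies.
    stored_b = cfa.copy()
    developed_b = stored_b * wb_gains

    print(stored_a, recovered_a, stored_b, developed_b)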
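
On items 12 and 15: a quick conversion of the quoted bit rates into storage per minute of footage. The arithmetic is straightforward (1 byte = 8 bits, 1 GB = 10^9 bytes); the bit rates are the ones quoted in the posts above.

    def gb_per_minute(megabits_per_second):
        # Mbps -> decimal gigabytes per minute of recording.
        return megabits_per_second * 1e6 / 8 * 60 / 1e9

    for label, mbps in [("Prores 422 4K 25p", 433),
                        ("Prores RAW 6K (normal)", 2300),
                        ("Prores RAW HQ 6K", 3500)]:
        print(f"{label}: ~{gb_per_minute(mbps):.1f} GB/min")

So roughly 3 GB per minute for Prores 422 versus about 26 GB per minute for Prores RAW HQ, which is the gap the posts are pointing at.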