Everything posted by Ilkka Nissila

  1. The CPU's topside thermal pad was still stuck to the aluminium plate in the Chinese disassembly. Continuing the disassembly, there is another thermal pad right under the CPU PCB, conducting heat to the larger metal plate on the underside. In addition to the top and bottom plates, the copper layers of the PCB itself conduct heat away from the components.
  2. I think the timer's purpose in the overheating management algorithm is two-fold. One is to provide the user with a degree of predictability, so that they can decide whether a particular record mode will be usable for the situation at hand. The user can e.g. try 4K HQ, note that the record time is 4 min, and then decide that it's safer to go with regular 4K to get the job done. If the overheat warning and shutdown merely reacted to the immediate temperature, the user would get no warning of the impending end of recording. The second reason is that heat damage depends on the duration of exposure to heat, not only on the momentary absolute temperature. Theoretically the algorithm could do the following: monitor (1) the current temperature at the three temperature sites and (2) the rate of change of the three temperatures, (3) compare the past and current temperature history with a heat budget, (4) compare the immediate temperature with absolute limits, and (5) try to deliver on the promised record time (even if the temperature is rising faster than expected), so that the initial "promise" of a certain record time given at the onset of recording is honoured and the user can make an educated decision about which mode to shoot in. Based on these considerations it can then give a current estimate of the record time left and decide whether to issue a warning or shut the camera down. Anyway, I am just considering this from a theoretical perspective and do not know how the algorithm actually works. The time element is an important consideration in instrument control systems, and monitoring just one parameter momentarily would probably give quite erratic-appearing behavior. The temperature sensors also produce some noise, so averaging the temperature can reduce that.
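Purely as a theoretical illustration of the ideas above (this is not Canon's actual algorithm; every threshold, unit and name in it is a hypothetical assumption), a minimal Python sketch of such an estimator could combine smoothed sensor readings, their rate of change, and a cumulative heat budget like this:

```python
# Hypothetical sketch of a record-time estimator; all constants are invented.
from collections import deque

T_LIMIT = 70.0   # assumed absolute shutdown temperature (deg C)
T_SAFE = 55.0    # assumed temperature above which the heat budget is consumed
BUDGET = 1000.0  # assumed budget in degree-seconds above T_SAFE

class OverheatMonitor:
    def __init__(self):
        self.history = deque(maxlen=30)  # last 30 smoothed samples at 1 Hz
        self.budget_used = 0.0

    def update(self, sensors):
        # Average the three sensor sites to suppress measurement noise.
        self.history.append(sum(sensors) / len(sensors))
        t = sum(self.history) / len(self.history)

        # Consume the heat budget in proportion to time spent above T_SAFE.
        if t > T_SAFE:
            self.budget_used += t - T_SAFE  # one 1-second step

        # Rate of change estimated from the smoothed history (deg C per s).
        rate = 0.0
        if len(self.history) >= 2:
            rate = (self.history[-1] - self.history[0]) / (len(self.history) - 1)

        # Remaining record time: whichever constraint binds first.
        to_limit = (T_LIMIT - t) / rate if rate > 0 else float("inf")
        to_budget = ((BUDGET - self.budget_used) / (t - T_SAFE)
                     if t > T_SAFE else float("inf"))
        remaining = min(to_limit, to_budget)

        if t >= T_LIMIT or self.budget_used >= BUDGET:
            return "shutdown", 0.0
        if remaining < 60.0:                 # warn a minute before the end
            return "warning", remaining
        return "ok", remaining

m = OverheatMonitor()
for t in (40, 45, 50, 56, 58, 60):           # one reading per second
    print(m.update([t, t + 2.0, t - 1.0]))   # three hypothetical sensor sites
```

The point of the sketch is that the remaining-time "promise" comes from whichever constraint binds first, and that smoothing the noisy readings keeps the estimate from jumping around erratically.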
  3. Apple did not repair it for free the second time. They offered to fix it for a cost, but this was not considered worth it because it was not a real fix to the underlying problem, just a replacement of the component with the same component, which would probably die again. In my opinion the failure rate of professional equipment should be such that in normal use most people never experience a failure during the lifetime of the product. I would consider e.g. a 1% failure rate over 7 years of daily use as the limit of acceptability for a tool that manages data. I've lost data because of equipment failure: a motherboard broke and damaged the hard drives, and when the backup drive was attached to make copies, it broke that too! (This was not an Apple computer but an HP.) In my opinion such equipment should be designed so that data loss (which is equal to loss of work) is extremely unusual.
  4. A friend's 2011 MacBook Pro died twice from overheating (the first time it was fixed under warranty by replacing the motherboard, but the actual problem was not solved and later recurred), so it seems clear that running at such high temperatures is not good practice. Also, it is not clear where the R5 measures the EXIF-reported temperature; it might not be the processor's internal temperature but a separate sensor inside the camera.
  5. It is normal for temperature control algorithms to factor in not only the current temperature but also the recent temperature history when calculating "what is the next move". It wouldn't be surprising if erasing the temperature history and other parameters by removing the battery forces the algorithm to carry on without that data, but that doesn't mean the camera then works in the way intended to protect it. It has been shown (by the freezer experiments) that the algorithm does monitor the temperature, and others have not been able to reproduce Andrew's fridge results; in fact there are reports of the temperature going down in the fridge as expected, with good recording time left in the high-quality modes after a similar period in the fridge. Perhaps the settings were different, explaining the different outcomes. Prolonged live view, focusing and IBIS activity can certainly be factors. Canon do recommend turning the camera off when not using it. Long-exposure noise in still photography has been shown to increase as the camera gets hot. Here is a discussion of the long-exposure noise and a comparison with rival cameras: http://www.mibreit-photo.com/blog/canon-eos-r5-image-quality/ "The amount of hot pixels and noise in such a photo taken with the R5 will be worse, the longer you have the camera turned on in advance to taking the long exposure." So it does appear that image quality deteriorates in certain situations when the camera heats up.
  6. From the CPU outward, there are layers of static air, circuit board, static air, metal and plastic at least. There is no effective thermal conduction path to the metal chassis, either from the outside or from the processor. Thus the processor cannot effectively dissipate heat, and outside cooling has only a delayed effect on the internal temperature of the components.
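As a rough illustration of why the air gaps dominate, here is a series thermal-resistance calculation in Python (R = thickness / (conductivity × area)); the layer thicknesses, conductivities and area are order-of-magnitude guesses, not measurements of any actual camera:

```python
# Hypothetical layer stack from processor to the outside; values are guesses.
AREA = 0.0004  # m^2, an assumed 2 cm x 2 cm heat path

# (layer, thickness in m, thermal conductivity in W/(m*K))
layers = [
    ("static air gap",          0.0010, 0.026),
    ("circuit board (FR-4)",    0.0016, 0.3),
    ("static air gap",          0.0010, 0.026),
    ("metal plate (aluminium)", 0.0010, 205.0),
    ("plastic shell",           0.0020, 0.2),
]

total = 0.0
for name, thickness, k in layers:
    r = thickness / (k * AREA)  # thermal resistance of this layer, K/W
    total += r
    print(f"{name:25s} R = {r:7.1f} K/W")
print(f"{'total':25s} R = {total:7.1f} K/W")
```

Even though the stack contains metal, the two thin layers of still air contribute most of the total resistance, which is consistent with external cooling having only a slow, delayed effect on the internal components.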
  7. An IR thermometer only records the surface temperature; the components may be hotter inside. Putting the camera in a freezer for 25 min reportedly restores the camera's video recording capabilities to baseline (see the experiment and observations by Jesse Evans at https://www.fredmiranda.com/forum/topic/1658635/4). Thus the camera is clearly monitoring its temperature and reacting accordingly, but after use that leads to overheating, it seems difficult to get the internal temperature to decline sufficiently without extreme measures. I think Canon could tweak the operation of the camera; for example, when it is sitting in a menu there should be no need to continue live view operation, and they could program the camera so that it doesn't read and process sensor data during that time and runs the processor at a lower speed to minimize heating while the user browses menus and adjusts settings. There could also be recommendations on what kind of live view settings to use to maximize subsequent recording time, and so on. It can't be that the camera loses its ability to record high-quality video merely by being switched on.
  8. The non-raw video formats are resampled from the original sensor data, and the 6K-to-4K conversion can be done by interpolation. This cannot be done for raw video, because a raw file stores the image before the RGGB Bayer pattern is converted into RGB pixels. There is no straightforward way to convert a 3x3 Bayer block (RGR / GBG / RGR) into a 2x2 block (RG / GB) covering the same subsection of the image, so they have to skip data. The alternative would be to do the Bayer interpolation to create a 6K RGB image, downsample that to 4K RGB, and finally re-Bayer it to produce the final 4K raw. In that case there would be no aliasing problems, but it wouldn't really be raw, i.e. original photosite data. Another option Nikon could have offered is a 1.5x cropped raw video without interpolation; that would have been raw without line skipping. There is no stupidity involved here, only different compromises to choose from.
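A toy example may make the geometry concrete. This Python sketch (a hypothetical 6x6 RGGB mosaic, not any camera's actual readout) shows that skipping whole pairs of Bayer rows and columns preserves a valid mosaic, whereas resampling the photosites directly to 2/3 scale would scramble the colour sites:

```python
import numpy as np

def bayer_pattern(h, w):
    """Colour of each photosite in an RGGB mosaic."""
    colours = np.empty((h, w), dtype="<U1")
    colours[0::2, 0::2] = "R"
    colours[0::2, 1::2] = "G"
    colours[1::2, 0::2] = "G"
    colours[1::2, 1::2] = "B"
    return colours

full = bayer_pattern(6, 6)

# Skip one row/column pair out of every three: 2/3 scale, as in 6K -> 4K.
keep = [i for i in range(6) if i % 6 < 4]
skipped = full[np.ix_(keep, keep)]

print(full)     # 6x6: rows alternate RGRGRG / GBGBGB
print(skipped)  # 4x4: still a valid RGGB mosaic, but with spatial gaps
```

The kept sites still alternate correctly, so the file remains true raw data, but the gaps where whole pairs were dropped are what produce the aliasing.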
  9. Dual pixel AF with a 45MP final output image would require a natively 91MP sensor, and for continuous AF all of this data would have to be read and processed during focusing. Cross-type phase detection with a quadruple pixel design would require 182MP (if a 2x2 layout is used instead of some other pattern). These things are easier to implement in a camera that isn't intended to produce high-resolution stills: dual pixel AF is limited by the available processing power, and a high pixel count makes it more difficult. Notice that Canon's 50MP models don't have dual pixel AF either. The D2H and D2Hs had an LBCAST sensor; I think the main problem wasn't the sensor technology but the fact that it was 4MP while Canon's was 8MP. The D2X had a 12MP sensor, but Nikon hadn't yet cracked high ISO at that time (the breakthroughs came later, with the D3s). In my opinion, the details of how Nikon collaborate with their partners to make the sensors for their cameras should not matter to the customer. Users should be interested in 1) image quality, 2) performance, and 3) cost; if the results are excellent, that is usually enough. It is clear that Nikon's focus isn't on video, but they offer video as a feature (rather than as the primary function of the camera). Nikon seem to prioritise still image quality over features such as video AF. This is neither a good thing nor a bad thing; every company would do well to concentrate on its strengths. I do think Canon may be more motivated to offer full frame 4K in the near future because both of their main rivals now offer it. Since Nikon are planning to release a high-end mirrorless camera system, that will surely require some kind of on-sensor PDAF, which can then be offered on the DSLR side as well.
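As a back-of-envelope check on those pixel counts (the bit depth and readout rate below are assumptions chosen only to show the scale of the problem, not actual camera specifications):

```python
# Hypothetical readout data rates for continuous on-sensor AF.
MPIX_OUTPUT = 45.5  # output image megapixels
BIT_DEPTH = 12      # assumed readout bit depth
FPS_AF = 60         # assumed readout rate while focusing

for name, per_pixel in (("dual-pixel", 2), ("quad-pixel", 4)):
    mpix = MPIX_OUTPUT * per_pixel
    gbit_s = mpix * 1e6 * BIT_DEPTH * FPS_AF / 1e9
    print(f"{name}: {mpix:.0f}M photodiodes, ~{gbit_s:.0f} Gbit/s to read")
# dual-pixel: 91M photodiodes, ~66 Gbit/s to read
# quad-pixel: 182M photodiodes, ~131 Gbit/s to read
```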
  10. Noise depends on the luminosity, or number of photons detected; it is not constant but increases approximately in proportion to the square root of the signal (the luminosity or photon count). The number of distinct tonal or colour values (that can be distinguished from noise) can be calculated if the SNR is known as a function of luminosity or RGB value. From these graphs it is possible to calculate how much tonal or colour information there is in the image, which is what DXOMark is estimating. You cannot estimate how many colours or tones are separable from noise by assuming a fixed "noise floor". In my opinion the DXOMark analysis here is sound. Color depth is just the number of bits used to encode the colour values of a pixel; it doesn't consider noise.
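The square-root behaviour also gives a simple way to count distinguishable levels: step from one level to the next by one noise standard deviation. A short Python sketch, with a hypothetical full-well capacity:

```python
import math

FULL_WELL = 60000  # assumed full-well capacity in electrons

def distinguishable_levels(s_max, k=1.0):
    """Count levels spaced k shot-noise standard deviations apart."""
    s, levels = 1.0, 0
    while s < s_max:
        s += k * math.sqrt(s)  # next level is one noise sigma away
        levels += 1
    return levels

print(distinguishable_levels(FULL_WELL))  # ~488
print(2 * math.sqrt(FULL_WELL))           # closed form: ~490
```

The closed form comes from integrating dS/sqrt(S) from 0 to S_max, which gives 2*sqrt(S_max); note that the count depends on the signal range and the noise curve, not on the bit depth of the encoding.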
  11. If you look more closely at the DXOMark measurements (the graphs), the D5 has better dynamic range than the 1DX II from about ISO 3200 to 51200. Which is more important (low ISO or high ISO dynamic range) can be argued depending on the application. Typical sports shooters produce publication-ready JPGs in the camera, which means their dynamic range is limited at that point in practice, even if they once in a blue moon get the chance to use low ISO. Furthermore, the tonal range (number of tones that can be separated from each other and from noise) and color sensitivity (number of color values that can be distinguished from each other and from noise) are greater in the D5 across the 100-25600 range than in the 1DX Mark II. For me these are very important measures of the smoothness of tonal and colour gradations; especially if the contrast is increased in post, they determine how well the image's tonal and colour integrity holds up. To decide which sensor is best for a given application, one needs to look at the shooting conditions and what kind of post-processing / look is preferred for the final image. The D5 isn't the ideal camera to shoot in direct sunlight due to its lower base ISO dynamic range; that much is clear. On the other hand, the 35mm full frame camera with the best base ISO dynamic range is also made by Nikon: the D810. So they have solutions for this situation as well, just in a different camera.

The D500/D7500 sensor allows fast reads for high fps use, which the D7200's sensor (which scores better on DXOMark's low ISO metrics) is apparently not well equipped to do. However, many users of the D500 report that they find the high ISO image quality better in the D500 than in the D7200, and the color neutrality holds across a greater range of ISO settings than in previous cameras. This is also true of the D5. So there are characteristics of the new sensors which are missed by DXO's overall scoring (which is mostly based on low ISO performance and ignores large parts of the elevated ISO measurements) but appreciated by photographers who use these cameras. In DXOMark's graphs, the D7500 has better dynamic range than the NX500 at every ISO setting, but the difference is pronounced from ISO 400 to 25600. DXO weight their overall score heavily toward base ISO results, which is usually not what people use in practice unless they work in the studio or are tripod-based landscape photographers. I think there is useful information in the DXOMark data, but you have to go into the graphs in the Measurements tab to access it.

I think the cropped 4K (which is the same actual-pixels crop as is used in Canon's 4K-capable DSLRs) is used because it requires less processing and produces less heat than doing a full sensor read and resampling the images to 4K. I don't think it's so much a question of who makes the sensor. If they wanted to, they could make a full frame 4K camera, but it would cost more, and most Nikon users are focused on still photography and only need some video capability on the side for occasional use. I realize you are primarily interested in video and would like Nikon to do better in that area. I am sure this sentiment is shared by many; however, Nikon's history is in still photography and they remain primarily focused on that. Users whose priorities lie in video tend to gravitate to other brands.

Since Nikon is working on a full frame mirrorless camera, I would expect them to implement some form of phase-detect focus sensors in the main image sensor, and at that point there will probably be more interest in using Nikons for video. But at present it seems that all the optimization Nikon do is aimed at getting the best still image quality possible for the applications expected of each particular camera.