
Raspberry Pi Releases an Interchangeable-lens Camera Module


androidlad

2 minutes ago, Anaconda_ said:

It's multiple. I have found a viewer for it and can see the different frames, but there's no way to export them from the viewer. For now, I've stopped the quest for raw, as finishing the physical build seems more important - though that's also on hold at the moment.

What is the viewer that you use?



Wow, that would be great. I tried raspiraw but couldn't wrap my head around it. Instead, I put my process together using this: https://www.raspberrypi.org/documentation/raspbian/applications/camera.md

Quote

All applications use the camera component; raspistill uses the Image Encode component; raspivid uses the Video Encode component; and raspiyuv and raspividyuv don't use an encoder, and send their YUV or RGB output directly from the camera component to file.

The same also applies to RGB capture (raspividyuv with the --rgb option), except that you get a .rgb file instead of a .yuv file.
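In case it helps anyone following along, here is a minimal sketch of that capture step driven from Python via subprocess, assuming the legacy raspividyuv tool. The width, height, frame rate and duration below are placeholder values, not Anaconda_'s actual settings; the key point is that the resulting .yuv file stores none of them, so they have to be noted down for the later processing steps.

```python
# Minimal sketch: drive raspividyuv from Python to capture headerless YUV420 frames.
# 640x480 / 25 fps / 10 s are placeholder values, not settings from this thread.
import subprocess

WIDTH, HEIGHT, FPS = 640, 480, 25   # remember these: the .yuv file carries no metadata

subprocess.run([
    "raspividyuv",
    "-w", str(WIDTH),
    "-h", str(HEIGHT),
    "-fps", str(FPS),
    "-t", "10000",          # capture for 10,000 ms
    "-o", "capture.yuv",    # YUV420 planes written frame after frame, no header
], check=True)
```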


On 7/13/2020 at 9:32 PM, Anaconda_ said:

Wow, that would be great. I tried raspiraw but couldn't wrap my head around it. Instead, I put my process together using this: https://www.raspberrypi.org/documentation/raspbian/applications/camera.md

Hello @Anaconda_, can you help? I filed a bug report on the GitHub page of PyDNG's developer, along with your Dropbox links to the original camera files, and received the following answer from the developer:

Quote

 

I took a look at those files; not quite sure how those would be processed. There is no metadata to specify the frame size or the number of frames, and no notable header or EOF marker.

What program did you use to generate those?

Are you sure that is raw Bayer data? YUV would suggest this is already demosaiced data that has been transformed to YUV space. This is not what a DNG wrapper would be used for.

 

https://github.com/schoolpost/PyDNG/issues/24
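For context, here is roughly the workflow PyDNG is built around, based on the usage shown in its README (details may differ between versions, so treat this as a sketch rather than gospel): raspistill's --raw option appends the sensor's Bayer data to the JPEG it writes, and PyDNG then unpacks that into a DNG. In other words, it handles stills, not the headerless .yuv video dumps discussed above.

```python
# Sketch of the still-photo workflow PyDNG targets (per its README at the time):
# raspistill --raw embeds the raw Bayer data in the JPEG, PyDNG extracts it to DNG.
import subprocess

from pydng.core import RPICAM2DNG

# Capture a still with the raw Bayer data appended to the JPEG.
subprocess.run(["raspistill", "--raw", "-o", "image.jpg"], check=True)

# Unpack the embedded Bayer data into a DNG next to the JPEG.
d = RPICAM2DNG()
d.convert("image.jpg")
```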


5 hours ago, rawshooter said:

Hello @Anaconda_, can you help? I filed a bug report on the GitHub page of PyDNG's developer, along with your Dropbox links to the original camera files, and received the following answer from the developer:

https://github.com/schoolpost/PyDNG/issues/24

I can’t look into this right now, but my process is as I described in the post above your question. 

When I'm home next week I can paste my Python script so you can see each step. But it's basically what I wrote above, with extra lines for frame size, frame rate, etc.
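Until that script gets posted, here is a rough sketch of the frame-splitting step it presumably needs, assuming YUV420 output from raspividyuv: each frame occupies width × height × 1.5 bytes, so the headerless dump can simply be sliced at that interval. The 640×480 size is illustrative only, and note that raspividyuv pads its buffers (width up to a multiple of 32, height up to a multiple of 16), so use the padded dimensions here if the capture size wasn't already aligned.

```python
# Sketch: split a headerless YUV420 dump (as written by raspividyuv) into per-frame files.
# WIDTH/HEIGHT must match the (padded) capture settings; 640x480 is just an example.
WIDTH, HEIGHT = 640, 480
FRAME_BYTES = WIDTH * HEIGHT * 3 // 2      # YUV420: full-res Y plane + quarter-res U and V

with open("capture.yuv", "rb") as f:
    index = 0
    while True:
        frame = f.read(FRAME_BYTES)
        if len(frame) < FRAME_BYTES:       # stop at EOF (or a trailing partial frame)
            break
        with open(f"frame_{index:05d}.yuv", "wb") as out:
            out.write(frame)
        index += 1

print(f"wrote {index} frames of {FRAME_BYTES} bytes each")
```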


The PyDNG developer has replied, and seems to be here on the forum as well:

Quote

 

So we have similar end goals, I see... saw your post on the EOSHD forums.

The tool used to generate those files isn't outputting raw; it's already demosaiced at that point (in either YUV or RGB space).

I haven't re-visited it yet, but the tool needed for RAW video is something like this one:
https://github.com/6by9/raspiraw

I don't think it's officially been updated to work with the new HQ Camera.

Likewise, I don't think it is possible to do the full sensor readout at 24-30 fps; I think it was said to be more like 10 fps. However, in the binned mode it can reportedly do 2K at 50 fps, so that might be a bit more feasible.

Worth doing some more investigation; try asking on the Raspberry Pi forums about the state of the raspiraw tool, whether it can be used with the HQ camera yet, and what the limitations are.

 

 


Thanks for the info. Raspiraw is a nightmare to figure out, and isn't compatible with Python. This is my first Pi / coding project, and implementing raspiraw seems way beyond my level of understanding. I asked on the forums, but the dev's approach is 'if you need someone to hold your hand, find an alternative', which doesn't exactly inspire me to learn his system. I don't need hand-holding, but reasonably easy-to-understand instructions wouldn't go unappreciated.

For now I'm just sticking to H.264, but the nice thing is that it's easy to tinker with and adjust the code even once the rest of my project is finished... who knows when that will be.

For the time being, my focus is on getting a 1-inch monochrome OLED display to show the camera preview from boot until shutdown.

 

If it helps, I'll happily share my VNC details in a PM and you can try your own code etc. on my hardware. Just promise you won't run a self-destruct script.


  • 1 month later...

For my project, raw video is still not working, and I've given up on it temporarily.

I'm trying to get a small OLED display to work and show what the sensor sees, but it's being a pain. I've tried everything I can think of, and can't get anything to show on the display. Not even a 'hello'. Power goes to the display, but it doesn't seem to even turn on at this point. Until I can get that working, the rest of the project is on hold. There's no point building it into the Bolex until I can see what I'm shooting. I also want to cut any holes in the body in one go. 

I've had a busy summer though, so haven't even touched the project for a few weeks. Kids are back at school next week though, so I should have a bit more time to tinker.
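On the OLED issue above: in case it's useful for debugging, here is a bare-bones sanity check using the luma.oled library, assuming an SSD1306-style controller on I2C bus 1 at address 0x3C (a common default; the actual panel, driver chip and wiring in this build are unknown, so treat the device class and address as placeholders). If this can't put 'hello' on the screen, the likely suspects are wiring, the I2C interface not being enabled in raspi-config, or the wrong address/driver, rather than anything camera-related.

```python
# Sanity-check sketch for a small monochrome OLED, assuming an SSD1306 controller
# on I2C bus 1 at address 0x3C (check with `i2cdetect -y 1`). A different panel
# would need a different luma.oled device class and/or SPI wiring instead.
from luma.core.interface.serial import i2c
from luma.core.render import canvas
from luma.oled.device import ssd1306

serial = i2c(port=1, address=0x3C)
device = ssd1306(serial)            # 128x64 by default

with canvas(device) as draw:
    draw.rectangle(device.bounding_box, outline="white")  # border, to confirm geometry
    draw.text((10, 25), "hello", fill="white")

input("Press Enter to quit (the display clears when the program exits)")
```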


  • 4 months later...
  • 2 weeks later...
  • 4 weeks later...
  • 3 weeks later...
  • 2 months later...
14 hours ago, PannySVHS said:

On another forum someone was dreaming of a speedbooster for this to push it up to Super 16 sensor size. Maybe it's time to rethink the classy classic EOS-M rather than waiting for a speedbooster for this little kitten. :)

There might be a way to attach a speedbooster.

This is a really interesting thread!


18 hours ago, PannySVHS said:

On another forum someone was dreaming of a speedbooster for this to push it up to Super 16 sensor size. Maybe it's time to rethink the classy classic EOS-M rather than waiting for a speedbooster for this little kitten. :)

How does the DR compare between them? If you're shooting RAW at lower ISOs, then DR is one of the biggest contributors to IQ, I find.


  • 1 month later...
  • 3 years later...

Another 10-bit camera module for the Raspberry Pi was recently announced; this time it is an "AI camera":

https://www.raspberrypi.com/products/ai-camera/

https://www.tomshardware.com/raspberry-pi/raspberry-pi-ai-camera-review-ai-for-the-masses

Quote

Specification

12.3 MP Sony IMX500 Intelligent Vision Sensor with a powerful neural network accelerator

Framerates:
  • 2×2 binned: 2028×1520, 10-bit, 30 fps
  • Full resolution: 4056×3040, 10-bit, 10 fps

7.857 mm sensor size

1.55 μm × 1.55 μm pixel size

78.3 (±3) degree FoV with manual/mechanical adjustable focus

F1.79 focal ratio

25 × 24 × 11.9 mm module dimensions

Integrated RP2040 for neural network firmware management

Works with all Raspberry Pi models, using our standard camera connector cable
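The AI camera, like the HQ camera on current OS releases, is driven through the libcamera/picamera2 stack rather than the old raspivid-style tools discussed earlier in this thread. As a rough sketch (standard picamera2 API; not verified on the AI camera specifically, so treat the mode selection and control names as assumptions), recording H.264 in the 2×2 binned 2028×1520 / 30 fps mode quoted above might look something like this:

```python
# Rough sketch (assuming the standard picamera2 API, not verified on the AI camera):
# record H.264 video using the 2x2 binned 2028x1520 / 30 fps mode quoted above.
import time

from picamera2 import Picamera2
from picamera2.encoders import H264Encoder

picam2 = Picamera2()
config = picam2.create_video_configuration(
    main={"size": (2028, 1520)},
    controls={"FrameRate": 30},
)
picam2.configure(config)

encoder = H264Encoder(bitrate=20_000_000)
picam2.start_recording(encoder, "binned.h264")
time.sleep(10)                      # record ten seconds
picam2.stop_recording()
```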

 

