Leaderboard
Popular Content
Showing content with the highest reputation on 07/27/2017 in all areas
-
Your ideal NX1 Settings
Pavel Mašek and 3 others reacted to erre for a topic
Hi guys! New user here from Italy. Next week, after much thought, I'm going to get the NX1 and I'm pretty excited. This forum was critical to my final decision and I have to thank you for all the opinions and settings that you've posted over the years about this amazing camera. I've just found a short film on Vimeo shot with the NX1 and I think it looks great (sadly I don't know the author's settings).
4 points
-
Well... the speed booster is yet another piece of gear I have... and I have concluded that if I'm gonna use it on the GH5, any size/weight advantage is pretty much null and void. Nice to know I'm not alone... but not really. I'm sort of looking for the silver bullet here. Yeah, I'm starting to think I just need to duplicate my Canon collection, with a m43 spin. So the above-mentioned, plus the 35-100mm f2.8... keep the Nocticron, it's a special lens. Get the SLR Magic compact... add some diopters. Maybe add the 100-400mm F4, and call it a day. That's it then... GH5 done... I think.
2 points
-
NETFLIX: Which 4K Cameras Can You Use to Shoot Original Content? (missing F5! WTH?!?)
EthanAlexander and one other reacted to BTM_Pix for a topic
Been doing some extensive research myself overnight. It's called Terre d'Hermès apparently, and according to the manufacturer it's "a symbolic narrative revolving around a raw material and its metamorphosis. A novel that expresses the alchemical power of the elements. A water somewhere between the earth and the sky. A journey imbued with strength and poetry. Woody, vegetal, mineral." I'll be dousing myself in this and inviting Sofia over for what I believe the youngsters call "Netflix and chill". Or "Netflix from an approved list of 4K capable originating cameras and chill", as it's known on here.
2 points
-
Schneider ES Cinelux 2x (Custom Single Focus Test)
Bold reacted to Dr. Verbel' for a topic
Schneider ES Cinelux 2x (custom single focus) | Carl Zeiss Jena Pancolar 50mm F:1.8 | Sony a6300. Some shots with a vari-ND. Early test; I still need to learn how to get the best results from this DIY setup... and a few stills with the Helios 44-2 at F:2 and... the Carl Zeiss Jena Pancolar 50mm F:1.8.
1 point
-
My Thoughts Canon 1DXMK2 vs Panasonic Lumix GH5
EthanAlexander reacted to DBounce for a topic
The 1DXMK2 has great color straight out of the camera, there is no doubt of this. However, the GH5 has much more gradable footage when you get to post. Both are capable. For stills I give the edge to the Canon, but for video I give the edge to the GH5... it packs so much into that small body that it boggles the mind. But in all honesty, either would do an admirable job in either role. They are truly great cameras. My 1DXMK2 is not going anywhere anytime soon and neither is my GH5. But on another note: my lens choices were fairly easy with the Canon... get the 24-70mm f2.8L II, the 70-200mm f2.8L II, the 16-35mm f4, and the 50mm 1.2. Those are my main lenses. But with the GH5 I am more conflicted. I wanted this to be a cinema cam, but find those Veydras heavy for everyday duty. I love the 42.5mm Nocticron, but it is a prime and I kind of miss my zooms. I have yet to settle on what is the "ideal" setup with the GH5. Any thoughts?
1 point
-
I'm happy you like the footage/edit. I never noticed the audio until later, probably because I had an AC running next to me. Framing was sort of ad hoc. While it is a lot more challenging to use this setup, I really feel like it might be worth the effort to master it. Initially, I thought the SLR Magic was a gimmick, but I believe it does bring something extra to a production. The GH5 is a real gem. I'm loving it as a creative tool (the camera, not me... or maybe both?). The best ergonomics of any of the hybrid solutions that I have used to date. Can't wait to see what the new firmware will bring.
1 point
-
My Thoughts Canon 1DXMK2 vs Panasonic Lumix GH5
EthanAlexander reacted to fuzzynormal for a topic
Give Germain Lalot a GH5 and a compelling subject and he'd make another film just as good, no question. One thing's for sure: his footage of Vanuatu is ridiculously better than the SD footage I shot while I was there in 1999!
1 point
-
Put an eBay Arca grip or Sugru on the front of the camera and more Sugru on the back where your thumb fits. Problem fixed.

Did you read about Ming Thein's problems with Leica? I think 7 out of 8 copies of the same lens were de-centred or had a malfunctioning iris? Have you seen the posts from Leica owners who have to wait months for simple repairs? Honestly, so not a problem for me... I really hope they don't:

https://www.l-camera-forum.com/topic/264856-leica-repair-wait-times/

https://petapixel.com/2016/04/11/30-years-photography-ive-never-service-experience-like/
"...Initially I believed that perhaps my eyes entering their 4th decade were responsible for these missed shots, but it turns out these missed shots were due to a badly aligned focusing mechanism along with a lens that was also focus shifting. This was a service issue that required shipping the lens and camera back to Leica to calibrate for an additional $300 and a month away, out of my hands... when it came time for the Monochrom to be serviced, I've not been able to get it back within anything close to a reasonable amount of time. The Leica Monochrom camera has been undergoing service with Leica for literally six months now. Say that with me... six months... SIX MONTHS! It turns out the problem is due to a defective sensor cover glass, and it is not just me that is inconvenienced by this. I understand that this may be a difficulty for Leica that is not entirely their fault, as they outsource production of sensors to another company. But what Leica absolutely owns about this is their relationship with the customer. This product was clearly defective and Leica's solution is simply to keep people's investments for half of a year or more. No offers were made to reimburse costs, or offer of a loaner camera, until I raised hell 5 months into the process. Nor, to my knowledge, were any offers made to treat Leica's customers with respect by offering refunds or exchanges for working models."

https://blog.mingthein.com/2014/05/06/qc-and-sample-variations/
"My experience with hand-assembled cameras and lenses – namely, Leica – has been less than stellar. I've gone through six samples of the 50/1.4 ASPH; one was mechanically defective – it threw an aperture blade on day two – one was astigmatic; two were just soft – one I suspect had a slightly too-short intermediate helicoid, the other perhaps elements that skewed slightly in all directions; only the first and last were 'good copies' – i.e. elements were aligned, the mechanical bits didn't break, and the lens generally performed to expectations and in line with the theoretical MTF chart. The thing is, short of the broken aperture – if you didn't handle more than one sample, you wouldn't know that the one you have was defective."

Leica quality control levels are waaaay below those of Panacanikony, as evidenced by, say, Lensrentals.com reports. So overall, no, I'm not impressed just because Leica once sent some filters that they bought for a fraction of the store price. And more importantly, Contax was always the cool 35mm rangefinder company...
1 point
-
Thanks for commenting and giving feedback! Different kinds of videos and different kinds of songs. In this one I wanted to leave time for the images: long transitions, slow and more like poetry. The other one is definitely more narrative-driven. To be fair, this video was just a bit too much for our micro crew to handle in the given time. I would like to have had one more shooting day for this, but you can't have everything. Both videos are made with a micro crew and a micro budget (or no budget). But that is not what makes a video good or bad, not at all. Just a fact.
1 point
-
Camera resolution myths debunked
UncleBobsPhotography reacted to meanwhile for a topic
I *do* have a background in exactly those subjects. Seriously. And, no, your work history on your LinkedIn page does not show any evidence of competence in anything other than basic C programming. And while I haven't treated you in a way that flatters your ego, I have been perfectly polite. And, no, I haven't used any ad homs. Here's the biggest point of all, which I was holding back to spare you embarrassment: raw is a specialized recording medium, not one for transmission - that's what JPEG is for. You make raws so you can make JPEGs of different qualities, fiddle with the image, etc. The people whose work you have grossly misunderstood are claiming it as a possible replacement for JPEG. So where on earth did you get the idea that it could replace raw??? The point of raw is that it is lossless and includes more data than the eye necessarily needs to see. The work you so grossly misunderstood is a form of lossy compression. It is not, by definition, a potential raw replacement. For you not to understand this shows not only that you don't understand the technology you are talking about, but that you don't understand what raw is. On a video forum where you have made almost 2000 posts... (And no, I don't feel like sharing my real-world identity with someone who, to say the least, seems like a compulsive balloon juice drinker.)
1 point
-
Camera resolution myths debunked
EthanAlexander reacted to jcs for a topic
Have you noticed how @HockeyFan12 has disagreed with me politely in this thread, and we've gone back and forth in a friendly manner as we work through differences in ideas and perceptions, for the benefit of the community? The reason you reverted to ad hominem is because you don't have a background in mathematics, computer graphics, simulations, artificial intelligence, biology, genetics, and machine learning? That's where I'm coming from with these predictions: https://www.linkedin.com/in/jcschultz/. What's your LinkedIn, or do you have a bio page somewhere so I can better understand your point of view? I'm not sure how these concepts can be described concisely from a point of view grounded solely in chemistry, which appears to be where you are coming from. Do you have a link to the results of the research you mentioned? It's OK if you don't have a background in these fields; I'll do my best to explain these concepts in a general way.

I used the simplest equation I am familiar with, Z^2 + C, to illustrate how powerful generative mathematics can create incredibly complex, organic-looking structures. That's because nature is based on similar principles. There's an even simpler principle, based on recursive ratios: the Golden Ratio: https://en.wikipedia.org/wiki/Golden_ratio. Using this concept we can create beautiful and elegant shapes and patterns, and these patterns are used all over the place, from architecture and aesthetic design to nature in all living systems.

I did leave the door open for a valid counter-argument, which you didn't utilize, so I'll play this counter-argument myself, which might ultimately help bridge skepticism that generative systems will someday (soon) provide massive gains in information compression, including the ability to capture images and video in a way that is essentially resolution-less, where the output can be rendered at any desired resolution (this already exists in various forms, which I'll show below) and even any desired frame rate. Years ago, there was massive interest in fractal compression. The challenge was, and still is, how to efficiently find features and structure which can be coded into the generative system such that the original can be accurately reconstructed.

RAW images are a top-down capture of an image: brute-force uncompressed RGB pixels in a Bayer array format. It's massively inefficient, and was originally used to offload de-Bayering and other processing from in-camera to desktop computers for stills. That was and still is a good strategy for stills; however, for video it's wasteful and expensive because of storage requirements. That's why most ARRI footage is captured in ProRes vs. ARRIRAW. 10- or 12-bit log-encoded DCT-compressed footage is visually lossless for almost all uses (the exception being VFX/green-/blue-screen work, where every little bit helps with compositing). Still photography could use a major boost in efficiency by using H.265 algorithms along with log encoding (10 or more bits); there is a proposed JPG replacement based on H.265 I-frames.

DCT compression is also a top-down method which more efficiently captures the original information. An image is broken into macroblocks in the spatial domain, which are further broken down into constituent spectral elements in the frequency domain. The DCT process computes the contribution coefficients for all available frequencies (see the link; how this works is pretty cool).
Then the coefficients are quantized (bits are thrown away), the remaining integer coefficients are further zeroed and discarded below a threshold, and the rest further compressed with arithmetic coding. DCT compression is the foundation of almost all modern, commercially used still and video formats. Red uses wavelet compression for RAW, which is suitable for light levels of compression before DCT becomes a better choice at higher levels of compression, and massively more efficient for interframe motion compression (IPB vs. ALL-I).

Which leads us to motion compression. All modern interframe motion compression used commercially is based on the DCT and macroblock transforms, with motion vectors and various forms of predictions and transforms between keyframes (I-frames), which are compressed in the same way as a JPEG still. See H.264 and H.265. This is where things start to get interesting: we're basically taking textured rectangles and moving them around to massively reduce the data rate.

Which leads us to 3D computer graphics. 3D computer graphics is a generative system. We've modeled the scene with geometry: points, triangles, quads, quadratic & cubic curved surfaces, and texture maps, bump maps, specular and light and shadow maps (there are many more). Once accurately modeled, we can generate an image from any point of view, with no additional data requirements, 100% computation alone. Now we can make the system interactive, in real time with sufficient hardware, e.g. video games.

Which leads us to simulations. In 2017 we are very close to photo-realistic rendering of human beings, including skin and hair: http://www.screenage.com.au/real-or-fake/ Given the rapid advances in GPU computing, it won't be long before this quality is possible in real time. This includes computing the physics for hair, muscle, skin, fluids, air, and all motion and collisions. This is where virtual reality is heading. This is also why physicists and philosophers are now pondering whether our reality is actually a simulation! Quoting Elon Musk: https://www.theguardian.com/technology/2016/oct/11/simulated-world-elon-musk-the-matrix A reality simulator is the ultimate generative system. Whatever our reality is, it is a generative, emergent system. And again, when you study how DNA and DNA replication work to create living beings, you'll see what is possible with highly efficient compression by nature itself.

How does all this translate into video compression progress in 2017? Now that we understand what is possible, we need to find ways to convert pixel sequences (video) into features, via feature extraction. Using artificial intelligence, including machine learning, is a valid method to help humans figure out these systems. Current machine learning systems work by searching an N-dimensional state space and finding local minima (solutions). In 3D this would look like a bumpy surface where the answers are deep indentations (like poking a rubber sheet). Systems are 'solved' when the input-output mapping is generalized, meaning good answers are provided for new input the system has never seen before. This is really very basic artificial intelligence; there's much more to be discovered.
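To make the block-DCT and quantization steps described above concrete, here is a minimal sketch (added for illustration, not code from the original post) of what a JPEG-style encoder does to a single 8x8 block. It assumes NumPy and uses a deliberately simple flat quantization step rather than the real per-frequency tables:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix: row k holds the k-th cosine basis vector."""
    k = np.arange(n).reshape(-1, 1)
    x = np.arange(n).reshape(1, -1)
    m = np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    m[0, :] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

def encode_block(block, q_step=16):
    """Forward DCT of one 8x8 pixel block, then coarse quantization of the coefficients."""
    d = dct_matrix(block.shape[0])
    coeffs = d @ (block - 128.0) @ d.T             # 2D DCT via separable matrix multiplies
    return np.round(coeffs / q_step).astype(int)   # quantization throws away precision

def decode_block(q_coeffs, q_step=16):
    """Dequantize and inverse DCT to reconstruct an approximation of the block."""
    d = dct_matrix(q_coeffs.shape[0])
    return d.T @ (q_coeffs * float(q_step)) @ d + 128.0

# A smooth gradient block compresses well: most quantized coefficients end up zero.
block = np.tile(np.linspace(80, 160, 8), (8, 1))
q = encode_block(block)
print("non-zero coefficients:", np.count_nonzero(q), "of", q.size)
print("max reconstruction error:", np.abs(decode_block(q) - block).max())
```

In a real encoder, the surviving coefficients would then be zig-zag scanned and entropy coded, which is the arithmetic-coding step mentioned above.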
The general idea, looking back at 3D simulations, is to extract features (resolution-less vectors and curves) and generalized multi-spectral textures (which can be recreated using generative algorithms), so that video can be massively compressed, then played back by rendering the sequence at any desired resolution and any desired frame rate! I can even tell you how this can be implemented using the concepts from this discussion. Once we have photo-realistic virtual reality and more advanced artificial intelligence, a camera of the future can analyze a real-world scene, then reconstruct said scene in virtual reality using the VR database. For playback, the scene will look just like the original, can be viewed at any desired resolution, and even cooler, can be viewed in stereoscopic 3D, and since it's a simulation, can be viewed from any angle, and even physically interacted with!

It's already possible to generate realistic synthetic images from machine learning using just a text description:
https://github.com/paarthneekhara/text-to-image
https://github.com/phillipi/pix2pix
http://fastml.com/deep-nets-generating-stuff/ (DMT simulations)
http://nightmare.mit.edu/

AI creating motion from static images, early results:
https://www.theverge.com/2016/9/12/12886698/machine-learning-video-image-prediction-mit
https://motherboard.vice.com/en_us/article/d7ykzy/researchers-taught-a-machine-how-to-generate-the-next-frames-in-a-video
1 point
-
Do you think this is possible with the GH5? This video from Canon just blew my mind. The colors and resolution are unreal.
1 point
-
Hey Kidz! Just keep the different image circles in mind. Tevidons above 16mm should be fine; the 10mm and 16mm are tricky on the G85, as I posted a few days ago in this thread. Glenn's Fujinon 12.5mm seems to be one of the nicest lens gems in C-mount. I am glad he shared his knowledge about it on this forum. For the Tevidons I wouldn't pay more than 100 USD though, as some sellers really push the prices too much. I paid around 30 USD for my Tevidon 16mm 1.8, which is still waiting for an adapter. Tevidons need a special Tevidon-to-C-mount or Tevidon-to-m43 adapter.
1 point
-
Glenn, you made me check out Edward G. Robinson. Nice find! That Netflix mogul doing an Edward G. Robinson impression is a great idea for a little scene in itself!
1 point
-
Camera resolution myths debunked
jonpais reacted to EthanAlexander for a topic
Unfortunately, I've been running into more and more clients who think that if I don't shoot 4K they won't get a nice image. One recent example was a client who had been doing the company's YouTube videos herself and had a ~$600 camcorder that had "4K" plastered on the side... I didn't bother explaining that I could get a better image with 10-bit 4:2:2 etc., for the same reason that having a physically larger camera already primes clients to think my work will look good. So I just shot 8-bit 4K on my "large" camera (FS5) and everyone loved it (thanks, Pro Colour!). On the other hand, I've still got plenty of clients who trust me to deliver the best image based on my past work and never complain if something isn't 4K. I love these clients.
1 point
-
I second that, given that the OP has the lenses. Sell your current E-M5 (body) and get a Mark II (body only) - in fact, get two of them; that way you can shoot multiple angles without having to reshoot. The only other advice I have, because you are new to this: look to other documentaries as a guide. Watch for color, composition, framing - just like photography... but also look for rhythm, scene switching and B-roll shots. Good luck. P.S. After getting an idea of how to shoot and edit, do some pre-planning so you know what you want to shoot, so the editing is less grueling.
1 point
-
Stills to video, need help
EthanAlexander reacted to maxotics for a topic
Get a used C100 and something like the Canon EF-S 17-55mm f/2.8. Even without much light, it will shoot beautiful images indoors. Use your MFT camera outside for b-roll.
1 point
-
NX1 exposure
Nicholson Ruiz reacted to Marco Tecno for a topic
These shots are very nice (and those guys look absolutely groovy).
1 point
Some of you may be interested: I tested my Feiyu MG V2 gimbal with my 5D Mark II + 24-105 F4 L. The max payload for the gimbal is 1630g; the camera + lens is 1520g, and it worked perfectly. I didn't even tune the motors yet. I used ML RAW at 1856x928, 23.976 fps. The focus was fixed (the AF is terrible with live view on the 5D2). The gimbal was calibrated with the lens set to 35mm. After that, at 24/50/70/105mm there was no recalibration; the motors handled the unbalanced setup perfectly. Color graded in DaVinci Resolve with Cinelog and the Ektar 100 Advanced LUT from hyalinejim. 5D2 on Feiyu MG V2 gimbal.
1 point
-
Camera resolution myths debunked
EthanAlexander reacted to jcs for a topic
Balloon juice? Do you mean debunking folks hoaxing UFOs with balloons, or something political? If the latter, do your own research and be prepared to accept what may at first appear to be unacceptable - ask yourself why you are rejecting it when all the facts show otherwise. You will be truly free when you accept the truth, more so when you start thinking about how to help repair the damage that has been done and help heal the world.

Regarding generative compression and what will someday be possible: have you ever studied DNA? Would you agree that it's the most efficient mechanism of information storage ever discovered in the history of man? Human DNA can be completely stored in around 1.5 Gigabytes, small enough to fit on a thumb drive (6×10^9 base pairs/diploid genome x 1 byte/4 base pairs = 1.5×10^9 bytes, or 1.5 GBytes). Those 1.5 GBytes of information accurately reconstruct, through generative decompression, about 150 Zettabytes (1.5 GBytes x 100 trillion cells = 150 trillion GBytes, or 150×10^12 x 10^9 bytes = 150 Zettabytes, 10^21). These are ballpark estimates, however the compression ratio is mind-boggling. DNA isn't just encoding an image, or a movie; it encodes a living, organic being. More info here.

Using machine learning, which is based on the neural networks of our brains (functioning similarly to N-dimensional gradient-descent optimization methods), it will someday be possible to get far greater compression ratios than the state of the art today. Sounds unbelievable? Have you studied fractals? What do you think we could generate from this simple equation: Z(n+1) = Z(n)^2 + C, where Z is a complex number? Or, written another way, Znext = Znow*Znow + C? How about this: from a simple multiply and add, with one variable and one constant, we can generate the Mandelbrot set. If your mind is not blown by that single image from that simple equation, it gets better: it can be iterated and animated to create video, and of course 3D (Mandelbulb 3D is free).

Today we are just learning to create more useful generative systems using machine learning, for example efficient compression of stills and, in the future, video. We can see how much information is encoded in Z^2 + C, and in nature with 1.5 GBytes of data for DNA encoding a complete human being (150 Zettabytes), so somewhere in between we'll have very efficient still image and video compression. Progress is made as our understanding evolves, likely through advanced artificial intelligence, to allow us to apply these forms of compression and reconstruction to specific patterns (stills and moving images) and, at the limit, complete understanding of DNA encoding for known lifeforms, and beyond!
1 point
-
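As an illustration of how little code the Z^2 + C recurrence needs (a minimal escape-time sketch added here, not code from the post), plain Python is enough to render a crude Mandelbrot set:

```python
# Escape-time rendering of the Mandelbrot set from the single recurrence z = z*z + c.
def escape_steps(c, max_iter=40):
    """Iterations of z = z*z + c before |z| exceeds 2 (or max_iter if it never does)."""
    z = 0j
    for step in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return step
    return max_iter

# Sample the complex plane on a coarse grid and print a crude ASCII picture.
chars = " .:-=+*#%@"
for row in range(24):
    im = 1.2 - row * 0.1                     # imaginary part, top to bottom
    line = ""
    for col in range(72):
        re = -2.1 + col * 0.042              # real part, left to right
        steps = escape_steps(complex(re, im))
        line += chars[min(steps * len(chars) // 40, len(chars) - 1)]
    print(line)
```

Points that never escape (the '@' band) belong to the set; the entire picture comes from that one multiply-and-add, which is the kind of generative compression the post is pointing at.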
1080 vs. 4K: What is REALLY necessary?
Snowfun reacted to TheRenaissanceMan for a topic
Does this board even have moderators?
1 point
1080 vs. 4K: What is REALLY necessary?
jonpais reacted to fuzzynormal for a topic
I've traveled to over 60 countries. There are rural neighborhoods in my home state of Michigan that easily match the shit-hole-ness of the third world.
1 point
-
Idk, I think Mr. Netflix may tell me, "Hey Kid, if only your not-horrible movie had pretty dames like Marty's movies, you may have something, see." Think bad Edward G. Robinson impression.
1 point
-
For now, just know all of your options, study up on what's possible and why some people choose certain options, and don't invest yet. Use your camera very bare-bones for the time being, until you know you can use it this way but can see exactly how something you can afford would be helpful. A pro can use just a camera and a lens. For instance, a variable ND makes sense, but the good ones cost money, it can be another thing to fidget with and make the image worse, and I very rarely want to change exposure within a shot. Maybe do mainly photography, at least at times - it's much easier to manage in almost every way than video, and your camera is better at photos than video. Obsess over it a little. And yeah, learn more from the real pros than from YouTube gear heads. Good luck!
1 point
-
Best cheap extras for starter camera, best techniques to master
Gregormannschaft reacted to meanwhile for a topic
For anyone finding this by Google in the future:

DSLR Video Shooter: Very useful on lights, as I think jonpais said, and also OK on basic lighting technique and some other gear: https://www.youtube.com/channel/UCMmA0XxraDP7ZVbv4eY3Omg (Note: Aputure, if that's the spelling, seem to be very good on CRI.)

Dugdale: I found his website completely useless. There may be useful info on basic technique there, but if so it is buried in a hotchpotch of other stuff.

Focal length reducers: Much less useful than I hoped. They reduce focal length by 0.7x before the m43 35mm-equivalence factor comes in, so a 50mm Takumar becomes about a 70mm-equivalent lens, and you can forget finding lenses to use as real wide angles.

Additional lens option: At least one Oly m43 lens seems to have real manual focus - the Pro 12-40mm. True of the other Pro lenses as well?
1 point
-
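As a quick sanity check on that focal-reducer arithmetic (a small sketch, assuming a 0.7x reducer and the standard 2.0x Micro Four Thirds crop factor):

```python
def ff_equivalent(focal_mm, reducer=0.7, crop_factor=2.0):
    """Full-frame-equivalent field of view of a lens used through a focal reducer on m43."""
    return focal_mm * reducer * crop_factor

# A 50mm Takumar through a 0.7x reducer acts as a 35mm lens on the sensor,
# which frames like a 70mm lens on full frame - hence "forget real wide angles".
print(ff_equivalent(50))   # 70.0
print(ff_equivalent(28))   # 39.2 - even a 28mm only gets you to roughly "normal"
```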
Magic Lantern Raw Video
cameraeye reacted to EthanAlexander for a topic
No, I totally thought I read 5D... I guess since ML lives on the SD in the other slot, there's no size limitation to the CF cards.
1 point
-
Best cheap extras for starter camera, best techniques to master
samuel.cabral reacted to fuzzynormal for a topic
Sure, you can do that, if you want to be awesome at making cat videos. (Street scenes are hardly ever awesome.) If that's what you want, no problem. It's fine. If you really want to make films and movies, however, TRUST ME: Do. Not. Wait. Just go do it and try your hardest. Don't let ignorance about technical details slow you down. To hell with ignorance. A little naïveté can be a blessing! Never think you can't accomplish a scene or shot simply because you don't have the best "x" on the market. Compelling stuff can be made with the "y" stuff you already have. The simple ambition to go out and make real stuff will leapfrog you over everyone else on the planet playing with their cameras rather than being filmmakers with their cameras. I don't know, maybe you just want to experiment with gear. Most here are the same, including me. All I can tell ya is that in filmmaking, solutions follow creativity, and not as often vice versa. All that said, start with a cheap LUMIX M43 cam, a cheap Chinese speed booster, and three cheap f2.8 manual prime lenses: 24mm, 50mm, 85mm. Add an outboard audio recorder with a decent boom mic (and operator), and a modest collection of Aputure LED lights. That's more than enough, and more powerful imaging equipment than most of the masters of cinema from the mid-20th century had at their disposal.
1 point
-
I remember the Kuro from 2008; it was very nice (ended up getting one of these, and am still using it as a live monitor for the C300 II in the studio): https://www.amazon.com/Sony-Bravia-KDL-52XBR5-52-Inch-1080p/dp/B000WDW6G6. I think the OLEDs have finally caught up with (and passed) the top plasmas, but yeah, it did take a while!
1 point
-
Okay, now the video is officially announced. So here it is. Feel free to comment and give feedback, thanks!
1 point
-
I have them both and I actually like the GX80 colors more. 50 fps 4K is, for me, the only reason the GH5 is a better choice (and to be honest the GH5 looks more like a professional camera, while the GX80 looks more like a tourist toy). I find V-Log unusable colorwise and I don't see a big difference between 10-bit and 8-bit footage.
1 point
-
Thanks @BTM_Pix! I now have CineLike D on my GX85! Would it be possible to unlock or add the ability to shoot in the 4:3 aspect ratio in 24p 4K? The camera only allows 16:9. I can shoot video in 4:3 using the 4K Burst (S/S) mode, but that's 30p. I'd like to use my 2X anamorphic attachment using the 4:3 aspect ratio.
1 point
-
Here is a new music video shot for a great artist, Nightstop. Don't miss this sleaziness! You can listen to the music if you haven't heard it before: soundcloud.com/nightstop
1 point
-
I have Resolve 14 Studio and it certainly is an upgrade, and Resolve has been my main editing and grading tool for a while now, but I wouldn't want to be without the Adobe Creative Suite at all. Premiere has a clunky layout and certainly needs a revamp, but its integration with Adobe Audition and After Effects means it's got advantages over Resolve even with the Fairlight panel. As far as I can see, the Fairlight panel at present is just a mixing desk (which is not bad in itself), but it's not yet an audio processing tool like Audition, any more than the FX library in Resolve is any serious challenge to After Effects if you want to do serious FX and compositing work. No doubt in the future you will have to purchase extra audio plugins for Fairlight to get a complete set of audio processing tools. Any serious filmmaker should have a range of tools at their disposal, and I don't see Premiere vs Resolve as an either/or choice - you need both!
1 point
-