appnext

Tuesday, July 3, 2018

Android P features you'll love: A better camera experience

Android P provides built-in support for multiple cameras and ways to help developers take high-quality photos in a jiffy.

Android P is going to make it easier to support almost any camera configuration with things like the new Multi-camera API.

Android comes in all shapes and sizes. It's one of the few consumer operating systems that lets a company tailor the experience to its hardware platform, and that's a big reason it's become so popular: a company that can do something different or better than the competition is free to do just that. You'll see that advantage exploited in numerous ways once you start looking at the vast assortment of Android-powered phones, but few of those differences are as evident as the camera.

Your phone probably has two or three actual physical cameras, lenses and all. But it could have four, or even just one, because, as mentioned, Android lets a company that makes phones do things its own way when it comes to hardware features. That hasn't been easy, though: engineers and developers have had to work hard to support each camera configuration on their own. While there will still be plenty of work involved in supporting different camera setups, Google has addressed some of the tough details in Android P.

Multi-camera API

You might have noticed that some phones, like the Samsung Galaxy S9+, use two cameras on the rear while others only have one. This isn't just for looks, and it isn't because two cameras are simply one better than one: the second lens is there to collect information the first isn't collecting while you take a photo.

Wide-angle photos aside (there is no way a computer algorithm can recreate them), you can do everything as well or better with one lens than you can with two, but it's not easy. Google uses a single rear camera and a laser diode on the Pixel 2 to capture excellent photos with a single lens, but it is also using incredibly powerful machine-learning (ML) algorithms that "know" what the objects in a photo are supposed to look like. The software can then adjust the photo so things look the way the algorithms, and hopefully our eyes, think they should.

Not every company making smartphones has the resources to build out proper real-time support for multiple cameras. Now Google is doing it for them.

Samsung doesn't have access to ML algorithms like these, at least not ones it is satisfied with using. What Samsung does have is a team of crack hardware engineers who can solve almost any problem, and a software team that can make the hardware work as it should. The Galaxy Note 8 (and other high-end models) uses two cameras on the rear to do things like measure distance and adjust focus, and there is no denying it does an equally excellent job. That's because Samsung has the resources to tackle something like Portrait Mode photos in its own way.

Not every company making Android phones has the resources to use two or more cameras at the same time to gather data and pack it all into one photo, so Google is making it easier with Android P's new Multi-camera API.

In Android P, developers will be able to gather image data from two or more cameras simultaneously. That means a phone with two rear or two front cameras can combine image data from each in real time and create photos that use seamless zoom, bokeh, stereo vision, or almost anything else a developer can dream up with two different streams of image data. Developers can also grab data from a "logical" camera that switches between two or more physical cameras while in use.
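
To make that concrete, here's a minimal Kotlin sketch against the Camera2 API as it stands in API level 28: walk the camera list and pick out a logical multi-camera along with the physical sensors behind it. The function name and the returned Pair are illustration choices, not anything Android prescribes.

```kotlin
import android.content.Context
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager
import android.hardware.camera2.CameraMetadata

// Finds the first logical multi-camera and the physical cameras behind it.
fun findLogicalMultiCamera(context: Context): Pair<String, Set<String>>? {
    val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    for (id in manager.cameraIdList) {
        val chars = manager.getCameraCharacteristics(id)
        val caps = chars.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES) ?: continue
        // The LOGICAL_MULTI_CAMERA capability is new in API 28 (Android P).
        if (caps.contains(CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA)) {
            // physicalCameraIds names the physical sensors this logical camera wraps.
            return id to chars.physicalCameraIds
        }
    }
    return null
}
```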

These ideas aren't new but native Android support is and that's a big deal.

This means a third "virtual" camera can be created that grabs image data using one or both rear cameras. An application can grab "normal" image data through one camera, distance data for a seamless zoom through a second, and switch the original camera back and forth to form a virtual stream in order to process something like a photo filter on the background. The switching happens so quickly that the original image data should be unchanged.
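
Here's a hedged sketch of what pulling two simultaneous streams from the physical lenses behind one logical device could look like. Everything passed in (the surfaces, the executor, the callback, and the assumption that the first two physical IDs are a wide and a telephoto lens) is a placeholder.

```kotlin
import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CameraDevice
import android.hardware.camera2.params.OutputConfiguration
import android.hardware.camera2.params.SessionConfiguration
import android.view.Surface
import java.util.concurrent.Executor

// Opens one capture session with each output pinned to a different physical lens.
fun createDualStreamSession(
    device: CameraDevice,        // an opened logical multi-camera
    physicalIds: List<String>,   // from CameraCharacteristics.getPhysicalCameraIds()
    wideSurface: Surface,
    teleSurface: Surface,
    executor: Executor,
    callback: CameraCaptureSession.StateCallback
) {
    // setPhysicalCameraId (API 28) routes an output to one sensor behind the logical device.
    val wideOutput = OutputConfiguration(wideSurface).apply { setPhysicalCameraId(physicalIds[0]) }
    val teleOutput = OutputConfiguration(teleSurface).apply { setPhysicalCameraId(physicalIds[1]) }
    val config = SessionConfiguration(
        SessionConfiguration.SESSION_REGULAR,
        listOf(wideOutput, teleOutput),
        executor,
        callback
    )
    device.createCaptureSession(config)  // the API 28 overload taking a SessionConfiguration
}
```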

You could have a zoomed-in photo through a telephoto lens that also uses hardware to create a black-and-white background, complete with digital bokeh. You probably shouldn't ever do such a thing, but if developers want to offer it, they can.

These are not "new" ideas. They are very similar to what companies like Samsung and LG have done with phones that use more than one camera to capture a photo. The difference is that Samsung and LG had to build that support themselves, because nothing like it existed in Android. Google adding this support directly into Android means better photos from manufacturers that don't have the resources Samsung or LG have.

The Multi-camera API will also support monochrome (think black-and-white) cameras. If the cameras are capable, they will be supported fully, just like the main high-resolution camera on a phone.

Even more goodies

The Multi-camera API in Android P will get all the attention and will have the bigger impact, but there are a few other important camera changes coming as well.

Session parameters are a way developers can have their app grab a picture without the initial capture taking forever to process, even while leveraging the new Multi-camera API, by handing the camera its expensive settings when the session is created. Session types like SESSION_REGULAR and SESSION_HIGH_SPEED let a developer decide how much of a phone's limited resource pool is used, so a photo gets grabbed quickly when it needs to be and not so quickly when it doesn't.
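
A small Kotlin sketch of both ideas, choosing a session type and attaching session parameters before the session ever starts; the function name and the preview template are assumptions for illustration:

```kotlin
import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CameraDevice
import android.hardware.camera2.params.OutputConfiguration
import android.hardware.camera2.params.SessionConfiguration
import android.view.Surface
import java.util.concurrent.Executor

fun createTunedSession(
    device: CameraDevice,
    previewSurface: Surface,
    executor: Executor,
    callback: CameraCaptureSession.StateCallback
) {
    val config = SessionConfiguration(
        SessionConfiguration.SESSION_REGULAR,  // or SESSION_HIGH_SPEED for constrained high-speed capture
        listOf(OutputConfiguration(previewSurface)),
        executor,
        callback
    )
    // Session parameters hand the camera expensive-to-change settings at session
    // creation instead of on the first capture request, trimming startup delay.
    val sessionParams = device.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW).build()
    config.setSessionParameters(sessionParams)
    device.createCaptureSession(config)
}
```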

Surface sharing will let applications "handle various use-cases without the need to stop and start camera streaming." This means an app doesn't have to stop collecting the image data it sees through the lens(es) while you decide what to do with the previous photo. That's important when you think of things like the short clips we know as live photos.
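
In Camera2 terms this builds on OutputConfiguration's surface-sharing calls, which existed in earlier releases and get more useful here. A sketch, assuming previewSurface and recordingSurface are two Surfaces with matching size and format:

```kotlin
import android.hardware.camera2.params.OutputConfiguration
import android.view.Surface

// One OutputConfiguration, two consumers of the same camera stream, so a
// consumer can be swapped without tearing the whole session down.
fun sharedOutput(previewSurface: Surface, recordingSurface: Surface): OutputConfiguration =
    OutputConfiguration(previewSurface).apply {
        enableSurfaceSharing()        // must be called before the session is created
        addSurface(recordingSurface)  // must match previewSurface's size and format
    }
```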

Smaller changes can have an impact, too. Android P has several important ones.

Other, even smaller, changes include an API that lets the screen act as a flash, blinking white at the moment a regular camera flash would fire, without a developer needing to code that themselves, as well as access to OIS (optical image stabilization) timestamps for application-level special effects; with this change, any app can stabilize a photo as well as the built-in camera app can.
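
Reading those OIS timestamps comes down to a single key on the capture result. A sketch, assuming the device actually reports OIS samples at all:

```kotlin
import android.hardware.camera2.CaptureResult

// Logs the OIS samples attached to a capture result (new in API 28).
fun logOisSamples(result: CaptureResult) {
    val samples = result.get(CaptureResult.STATISTICS_OIS_SAMPLES) ?: return
    for (s in samples) {
        // Each sample pairs a sensor timestamp with the lens shift in pixels,
        // enough for an app to replay or undo the stabilizer's motion.
        println("t=${s.timestamp} x=${s.xshift} y=${s.yshift}")
    }
}
```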

Last but not least, proper support for external USB cameras is coming, so things like inspection cameras, microscopes, or even telescopes can be used through your phone's USB port. Developers won't need to do a lot of work writing a driver, and they'll get more features than the basic "USB webcam" interface that works in some cases today.
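
External cameras show up through the same Camera2 enumeration as built-in ones, tagged with an "external" lens facing (a value the API has had for a while; P's change is making such cameras actually work properly). The helper name below is made up:

```kotlin
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager

// Lists camera IDs whose lens faces "external", for example a USB camera.
fun externalCameraIds(manager: CameraManager): List<String> =
    manager.cameraIdList.filter { id ->
        manager.getCameraCharacteristics(id).get(CameraCharacteristics.LENS_FACING) ==
            CameraCharacteristics.LENS_FACING_EXTERNAL
    }
```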

None of these changes will make us better photographers, but they will make our cameras good enough that we can take better photos. In the end, that's all that counts, right?
