Getting a picture of the camera screen and the augmented image together


I used this call to take a picture of the screen,


and got the JPEG picture from this callback function:

public void onPictureTaken(Bitmap picture) {

This works; however, I only get the picture without the augmented reality image on it. How can I get a picture with the augmented reality image added to it?

Beyondar's picture

Hi kkkkk (weird name btw)

This part is not finished yet (check out this open ticket), but in the meantime you can use the Utils.takeSnapShot method inside the beyondar package. I'll add more documentation and make it easier to use in future releases. Thanks for the feedback!
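For anyone reading along, here is a very rough sketch of how that workaround might be wired up. The exact Utils.takeSnapShot signature is not documented anywhere in this thread, so the call shape and callback below are assumptions pieced together from later messages; check the beyondar package source before relying on them:

```java
// Sketch only (Android): Utils.takeSnapShot is mentioned in this thread,
// but its exact signature is an assumption; check the beyondar source.
import android.graphics.Bitmap;

public class SnapshotExample {
    void takeArPicture() {
        // Assumed call shape, based on later messages in this thread:
        // Utils.takeSnapShot(mBeyondarCameraView, mBeyondarGLSurface,
        //         new SnapshotCallback() {
        //             public void onSnapshot(Bitmap merged) {
        //                 // 'merged' contains the camera frame + AR layer
        //             }
        //         });
    }
}
```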

kkkkk's picture

Thank you for responding about my BeyondAR library question. I was looking in the code yesterday and found the method you mentioned, takeSnapShot. It works on all my devices to create one bitmap from the two separate images, the GLSurfaceView image and the cameraView image.

I made a method in my mainActivity class to create a JPEG image from the bitmap and save it to external storage, in the directory /mnt/SDcard/.
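A minimal sketch of such a save method, using standard Android APIs; the file name parameter is a placeholder, and on real devices the app needs the WRITE_EXTERNAL_STORAGE permission:

```java
// Sketch (Android): save a Bitmap as a JPEG on external storage.
// File name is illustrative; WRITE_EXTERNAL_STORAGE permission required.
import android.graphics.Bitmap;
import android.os.Environment;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class BitmapSaver {
    public static File saveAsJpeg(Bitmap bitmap, String fileName) throws IOException {
        File dir = Environment.getExternalStorageDirectory();
        File out = new File(dir, fileName);
        FileOutputStream stream = new FileOutputStream(out);
        try {
            // 90 is the JPEG quality (0-100)
            bitmap.compress(Bitmap.CompressFormat.JPEG, 90, stream);
        } finally {
            stream.close();
        }
        return out;
    }
}
```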

When looking at the produced JPEG image, I found two problems.

I was able to fix the first problem (the image being 90 degrees off) by using the Matrix class to rotate the cameraView image 90 degrees, so it now has the same orientation as the GLSurfaceView.
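The rotation fix described above can be sketched like this, using standard Android APIs (the class and variable names here are illustrative, not from the library):

```java
// Rotate the camera bitmap 90 degrees so it matches the GL surface
// orientation (standard Android APIs; names are illustrative).
import android.graphics.Bitmap;
import android.graphics.Matrix;

public class RotateUtil {
    public static Bitmap rotate90(Bitmap src) {
        Matrix matrix = new Matrix();
        matrix.postRotate(90);
        // Creates a new bitmap with the rotation applied
        return Bitmap.createBitmap(src, 0, 0,
                src.getWidth(), src.getHeight(), matrix, true);
    }
}
```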

Now the next problem is the cameraView size: it is much smaller than the GLSurfaceView, so when both images are placed together there is a large black area on the right and below, as described earlier.

I am having trouble finding the place in the code to make the cameraView larger, or to make it the same size as the GLSurfaceView.

The GLSurfaceView image is perfect, but how do I change the cameraView to get the same size as the GLSurfaceView?

Beyondar's picture

It is done inside the private class ScreenShootCallback inside Utils. One of the problems I have is that it uses Bitmaps, and if the image is too large it will cause an OutOfMemoryError with the big Bitmap. In the new version I want to do all the operations in a different way in order to create HD pictures; for now, a hot fix for this issue is reducing the size of the bitmaps (and therefore losing quality).

kkkkk's picture

I have made adjustments to the ScreenShootCallback method and the mergeBitmaps method inside the ImageUtils class, and still cannot get the two images to be the correct size. Either one gets too big or the other gets too small; I cannot get both images to be the same size. I am calling the takeSnapShot method in my main class to take the picture, passing the mBeyondarCameraView and mBeyondarGLSurface to the method and getting bitmaps of both of them. The first problem was solved by using a Matrix to rotate the cameraView 90 degrees, because the image was at a 90-degree angle to the surface view.

So after rotating the cameraView 90 degrees, it has the same orientation as the surface view. That was very easy to fix, unlike the size difference problem.

I tried to increase the size of the cameraView to get it to the same size as the surface view, but the result was that the surface view looked strange and was not the right size. I also tried to reduce the size of the surfaceView to match the cameraView, still without success. I have not found a solution to this mystery yet, but I will continue working on the problem.

If you have any ideas on how to fix this, it would be appreciated.

kkkkk's picture

I understand that you want to keep the cameraView scaled to a smaller size to avoid the out-of-memory error for this bitmap size. That is a good idea, so I will focus on reducing the size of the surfaceView image so it can be the same size as the cameraView image. How to do that is what I am working on now.
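One way to work out the downscaled size, assuming you want to shrink the surfaceView to fit inside the cameraView bounds without distorting its aspect ratio (the helper below is plain Java and my own, not part of the library; on Android you would then pass the result to Bitmap.createScaledBitmap):

```java
// Pure-Java helper: compute the largest size that fits inside the
// camera view's bounds while preserving the surface view's aspect ratio.
public class FitSize {
    public static int[] fitWithin(int srcW, int srcH, int maxW, int maxH) {
        // Use the smaller scale factor so both dimensions fit
        double scale = Math.min((double) maxW / srcW, (double) maxH / srcH);
        return new int[] {
            (int) Math.round(srcW * scale),
            (int) Math.round(srcH * scale)
        };
    }

    public static void main(String[] args) {
        // Sizes reported later in this thread: surfaceView 1066x758,
        // cameraView 480x640
        int[] target = fitWithin(1066, 758, 480, 640);
        System.out.println(target[0] + "x" + target[1]); // prints 480x341
        // On Android you would then call (not compiled here):
        // Bitmap scaled = Bitmap.createScaledBitmap(
        //         surfaceBmp, target[0], target[1], true);
    }
}
```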

kkkkk's picture

I am making some changes to the checkResults() method in the ScreenShootCallback class to see if that can fix the problem with the size difference between the cameraView and surfaceView.

kkkkk's picture

I wanted to add what I got from Logcat in Eclipse to show the size difference in the current code, from the mergeBitmaps method in ImageUtils:


cameraView = 480  X  640

surfaceView = 1066  X  758

bmpOverlay = 1066 X 758

So I guess the fix looks simple: I have to shrink the surfaceView and the overlay. The overlay is the reason all that black empty space exists on the top and side of the image. The surfaceView is a JPEG image, and I am guessing it has a transparent background.


kkkkk's picture


I found out through some experimentation that I can reduce the size of the cameraView from 640 x 480 down to a smaller size, for example 300 x 400, and it gets even smaller than it was in relation to the other two images (the surfaceView and the bitmapOverlay, which are both 1066 x 768). However, I wanted to make the cameraView larger and tried 1066 x 768, and I got this runtime error in Logcat:

y + height must be <= bitmap.height()

This occurred on line 139, in the onPictureTaken method of the ScreenShootCallback private inner class of the Utils class.

So the question is: where in the code is this size limit located? There seems to be a limit of 640 x 480, and if I try to make the cameraView larger, I get this error. I looked but could not find a hardcoded size limit of 640 x 480 anywhere in the code.

Because of this, I decided another possibility is to reduce the sizes of both of the other images (bitmapOverlay and surfaceView). One idea is to record the width and height of the overlay in shared preferences or a static variable, and when the bitmapOverlay and surfaceView are created, make them that same size from the beginning.

Does this sound like a good idea?



kkkkk's picture

Sorry, a correction to my last message:

Record the width and height of the CAMERAVIEW, put that in shared preferences or a static variable, and when the bitmapOverlay and surfaceView are created, make them that same size from the beginning.
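The idea above could be sketched as a merge that builds the overlay at the camera bitmap's size and scales the GL surface bitmap into it, so no black borders are left. This is my own sketch under that assumption, not the library's actual fix; note that stretching the GL bitmap to the camera's aspect ratio may distort it slightly:

```java
// Sketch (Android): build the overlay at the camera bitmap's size and
// scale the GL surface bitmap into it, so no black borders remain.
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Rect;

public class MergeAtCameraSize {
    public static Bitmap merge(Bitmap cameraBmp, Bitmap glBmp) {
        int w = cameraBmp.getWidth();
        int h = cameraBmp.getHeight();
        Bitmap overlay = Bitmap.createBitmap(w, h, cameraBmp.getConfig());
        Canvas canvas = new Canvas(overlay);
        // Camera frame first, at its native size
        canvas.drawBitmap(cameraBmp, 0, 0, null);
        // Then stretch the GL bitmap to cover the whole overlay,
        // with bilinear filtering for smoother scaling
        Paint paint = new Paint(Paint.FILTER_BITMAP_FLAG);
        canvas.drawBitmap(glBmp, null, new Rect(0, 0, w, h), paint);
        return overlay;
    }
}
```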

Beyondar's picture


Wow!! You did a lot of work! :)

I think that the best way is to change the method mergeBitmaps from ImageUtils:

 public static Bitmap mergeBitmaps(Bitmap bmp1, Bitmap bmp2) {

        // Create a canvas as large as the bigger of the two bitmaps
        int width = Math.max(bmp1.getWidth(), bmp2.getWidth());
        int height = Math.max(bmp1.getHeight(), bmp2.getHeight());
        Bitmap bmOverlay = Bitmap.createBitmap(width, height, bmp1.getConfig());
        Canvas canvas = new Canvas(bmOverlay);
        canvas.drawBitmap(bmp1, new Matrix(), null);
        canvas.drawBitmap(bmp2, 0, 0, null);
        return bmOverlay;
 }

Maybe improving the following lines:

        //The canvas should resize the bitmap according to the canvas size???
        canvas.drawBitmap(bmp1, new Matrix(), null);
        canvas.drawBitmap(bmp2, 0, 0, null);

I would like to see your code: can you fork the project on GitHub, follow this issue thread, and show me what you did? I'm going to fix that part as soon as possible because I think it is a nice feature!!

Thanks again for your feedback

kkkkk's picture

I just forked the project and sent a pull request so you can see the changes I made in the code. After the changes, I tested this on three Android phones and one 2013 Nexus 7 tablet and did not find any problems. I wrote a method in my mainActivity that uses the newly updated code in the framework to take an image and save it to the SD card, and it shows the correct size and orientation.

Thank you for your response to my questions. I had to read more of the code yesterday, and now I understand it better.