
Thread: AIR iOS GPU question(s)

  1. #1
    Senior Member joshstrike's Avatar
    Join Date
    Jan 2001
    Location
    Alhama de Granada, España
    Posts
    1,136

    AIR iOS GPU question(s)

    Hi all, been a long time. So glad I have a reason to get back to AS3 for a change!

    I'm working on my first iOS game right now, and have some questions about the setup. Normally on desktop, if I need to run animations or grab individual characters out of a sprite sheet, I just throw a bitmap and a mask into a sprite and move the bitmap around. In cases where I need a lot of instances of the same thing, I'll use copyPixels to cache extra copies of the BitmapData for the cells I need, and then throw them into existing or new Bitmaps.
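
    For reference, the desktop-style cell caching I'm describing is roughly this (just a sketch; the cell size and frame count are placeholder parameters):

    Code:
    //rough sketch of the desktop approach: slice cells out of a loaded sheet with copyPixels,
    //then re-point a single Bitmap at whichever cached cell is needed
    import flash.display.Bitmap;
    import flash.display.BitmapData;
    import flash.geom.Point;
    import flash.geom.Rectangle;
    
    function extractCells(sheet:BitmapData, cellW:int, cellH:int, count:int):Vector.<BitmapData> {
        var cells:Vector.<BitmapData> = new Vector.<BitmapData>();
        for (var i:int = 0; i < count; i++) {
            var cell:BitmapData = new BitmapData(cellW, cellH, true, 0x00000000);
            //copyPixels is a straight pixel blit, no transform, so it's cheap on the CPU side
            cell.copyPixels(sheet, new Rectangle(i * cellW, 0, cellW, cellH), new Point(0, 0));
            cells.push(cell);
        }
        return cells;
    }
    
    var character:Bitmap = new Bitmap();
    //character.bitmapData = cells[frameIndex];  //re-point instead of redrawing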

    Fine. My understanding is that all BitmapData instances are automatically uploaded to the GPU on iOS, and pointing a Bitmap to a new BD object takes place completely on the GPU. If that's true, then great. However:

    1. I also understand there is a "practical limit" of 4-6 million pixels for what the GPU will cache. Caching all the sprites in this game as BitmapData at load would exceed that. Does the GPU therefore optimize what it wants to cache and what it doesn't, or should I be actively destroying BitmapData that will be reused later? Or is old data just garbage collected from the GPU somehow and uploaded again when necessary? What I don't want is to scan a sprite sheet and run copyPixels 64 times again to load a character in the middle of the action. Edit: Additionally, does the GPU only cache/uncache BitmapData when it's added/removed from the stage? Or does it literally cache all BitmapData as soon as it's created?

    2. I also came across an older Adobe doc with an interesting "gotcha" about using masks in code -- obviously a large Bitmap sprite sheet, placed in a Sprite with .mask set on it, won't be uploaded to the GPU (or worse, will be uploaded every frame). But this doc (which I can't find, argh) indicated that "if you want to mask a bitmap, put the mask on a separate layer".

    In all my research, I have never heard this mentioned as a way to run sprite sheets in Flash. Everyone always talks about some variation of copyPixels and caching the BitmapData, then either copying it or pointing to it. However, this chimes with something I know from doing a lot of work in Scaleform (with AS2), which similarly leverages the GPU. In Scaleform prior to AS3, setting code masks on movieclips just doesn't work. However, if you place your bitmap or vector on the bottom layer of a 2-layer movieclip, with a fixed vector mask on the top layer, it does indeed work. The only drawback (in Scaleform, anyway) is that all masks you do that way end up combined as a single mask on the stage that affects all objects masked that way. I'm thinking that might not be the case on the iPhone in AS3.

    Has anyone tried this method? It seems like it would be a great reduction in initial overhead to just keep the sprite sheet PNGs intact as single bitmaps and move them around under a fixed mask on a separate layer, if in fact that would allow the whole sprite sheet to cache as one texture on the GPU and be clipped. Of course, it doesn't answer the 6M pixel limit question... but... any thoughts?
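
    In code, the layered-mask setup I'm imagining would look something like this (untested sketch; the function and parameter names are just placeholders):

    Code:
    //sketch of the "mask on a separate layer" idea: the intact sheet bitmap on the bottom "layer",
    //a fixed one-cell window shape on top, mask assigned in code. Whether this lets the whole
    //sheet live as one cached texture on iOS is exactly the open question.
    import flash.display.Bitmap;
    import flash.display.BitmapData;
    import flash.display.Shape;
    import flash.display.Sprite;
    
    function buildMaskedSheet(sheetData:BitmapData, cellW:int, cellH:int):Sprite {
        var holder:Sprite = new Sprite();
        var sheet:Bitmap = new Bitmap(sheetData);   //the whole sprite sheet PNG, untouched
        var window:Shape = new Shape();
        window.graphics.beginFill(0xFF0000);
        window.graphics.drawRect(0, 0, cellW, cellH);   //fixed, one-cell-sized window
        window.graphics.endFill();
    
        holder.addChild(sheet);    //"bottom layer"
        holder.addChild(window);   //"top layer"
        sheet.mask = window;       //slide the sheet under the fixed mask to change frames
        return holder;
    }
    
    //advancing the animation is just moving the sheet, no new BitmapData:
    function showFrame(holder:Sprite, f:int, cellW:int):void {
        holder.getChildAt(0).x = -f * cellW;
    }
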
    Last edited by joshstrike; 04-02-2014 at 01:47 PM.
    The Strike Agency
    http://www.theStrikeAgency.com

    StrikeSapphire -- The Original Bitcoin Casino
    https://strikesapphire.com
    (not available in the US)

  2. #2
    Senior Member joshstrike's Avatar
    Join Date
    Jan 2001
    Location
    Alhama de Granada, España
    Posts
    1,136
    One additional, interesting question:

    If it's true that all BitmapData is automatically cached to the GPU, then does clearing or setting width/height to zero on that data immediately uncache it? If so, and if we're up against this hard pixel limit... would it make sense to make a hybrid class that extends BitmapData but also includes a ByteArray, where when we want to uncache something we just dump the bitmap (i.e., asynchronously tell the GPU to uncache it) and then refill that data from the ByteArray when it's called up? Would that overcome the limit and allow us to control what gets uploaded to and cleared from GPU memory, and when?
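
    Roughly something like this, wrapping rather than extending just to keep the sketch short (and whether dispose() really evicts the texture is exactly what I'm asking):

    Code:
    //sketch of the hybrid idea: park the raw pixels in a ByteArray, dispose the BitmapData
    //(assumption: that frees its GPU texture), and rebuild the BitmapData on demand
    package {
        import flash.display.BitmapData;
        import flash.geom.Rectangle;
        import flash.utils.ByteArray;
    
        public class BufferedCell {
            private var _bytes:ByteArray;
            private var _rect:Rectangle;
            private var _bmd:BitmapData;
    
            public function BufferedCell(source:BitmapData) {
                _rect = source.rect.clone();
                _bytes = source.getPixels(_rect);   //CPU-side copy of the raw pixels
                _bmd = source;
            }
    
            //drop the (presumably GPU-backed) BitmapData, keep only the ByteArray
            public function unload():void {
                if (_bmd) { _bmd.dispose(); _bmd = null; }
            }
    
            //rebuild the BitmapData from the buffered bytes when it's needed again
            public function get bitmapData():BitmapData {
                if (!_bmd) {
                    _bmd = new BitmapData(int(_rect.width), int(_rect.height), true, 0x00000000);
                    _bytes.position = 0;
                    _bmd.setPixels(_rect, _bytes);
                }
                return _bmd;
            }
        }
    }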

  3. #3
    Senior Member
    Join Date
    Nov 2001
    Posts
    1,145
    Wow. Just got logged out while responding and lost 15 minutes.

  4. #4
    Senior Member
    Join Date
    Nov 2001
    Posts
    1,145
    It doesn't sound like you're using Scout? You have to use Scout. It will tell you everything....everything.

    Make sure the bitmap dimensions are smaller than the screen dimensions of the device (monitor). It's usually the case that the memory limits you're talking about are set just over the screen dimensions (makes sense). So cut your images into tiles smaller than the screen and they'll tween with no fps hit.

    Air is far better than early days. The devices are far stronger too.

    Overall, do everything like you would normally in flash, create clips, subclips, masks, drop shadows, text, draw boxes, whatever, but just turn all your clips into bitmaps before showing them. So use everything in Flash to create the clip, just turn the clip into bitmap before you put it on the screen (concept you know well).

  5. #5
    Senior Member joshstrike's Avatar
    Join Date
    Jan 2001
    Location
    Alhama de Granada, España
    Posts
    1,136
    Well... it's not the still images, like tiles, that I'm really worried about... it's trying to playback an animation from a spritesheet by drawing them really fast into a single Bitmap object. Combined, they are way more than screen size. So instead of making them all BitmapData at the same time, I want to cache them in a sane way and then force them onto the GPU when I need them. I thought maybe putting them in ByteArrays and then dumping that to existing BitmapDatas, and then pointing the Bitmap to those bitmapdatas every frame would maybe give me more control over that.

    What seems to be happening is that AIR is creating a texture for every BitmapData in GPU mode. I'm just trying to get a handle on how to garbage collect those or force it to create them on demand, rather than creating all of them at the start of the game...

    as it is, I might end up just using Starling. I sort of hate it but maybe it makes more sense than trying to control things this way.

  6. #6
    Senior Member
    Join Date
    Nov 2001
    Posts
    1,145
    Have each individual bitmap be smaller than the screen size - not the total. The device has to at least be able to handle a bitmap as large as its display as far as I can tell.

    So for a 2048x1536 screen, the max size (total pixels) of the bitmap will be bigger than that. So you can make tiles of 1600x900 (photo, text, whatever), stack 10 of them inside a clip, and tween that clip smoothly without dropping below 60fps on that device. So you're tweening a movieclip (bitmaps only) that is 10000 pixels high.
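
    Rough sketch of the stacking, with tileDatas standing in for your pre-cut tiles:

    Code:
    //screen-sized (or smaller) bitmap tiles stacked vertically inside one container,
    //and the container is what gets tweened
    import flash.display.Bitmap;
    import flash.display.BitmapData;
    import flash.display.Sprite;
    
    function buildStrip(tileDatas:Vector.<BitmapData>):Sprite {
        var strip:Sprite = new Sprite();
        var yPos:Number = 0;
        for (var i:int = 0; i < tileDatas.length; i++) {
            var tile:Bitmap = new Bitmap(tileDatas[i]);
            tile.y = yPos;                  //stack the tiles top to bottom
            yPos += tileDatas[i].height;
            strip.addChild(tile);
        }
        return strip;   //add to the stage and tween strip.y to scroll the whole stack
    }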

    You can put a thousand copies of the bitmap on the stage without any additional load for the bitmap (approximately). You load and trash bitmaps in your pool as needed during loading scenes or times when the fps hit doesn't matter.

    So yes, you can use sprite sheets. You can load a sprite sheet into an array. Then, instead of blitting or any other unique method, you can use regular methods. For example, create a clip called characterWalking, loop through the array and add the bitmaps to characterWalking and simply use visible property and a timer.
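
    A rough version of that characterWalking setup (frameBitmaps stands in for the array you built from the sheet, and the frame rate is just an example):

    Code:
    //all frames added once as Bitmap children, then a Timer just flips .visible
    import flash.display.Bitmap;
    import flash.display.Sprite;
    import flash.events.TimerEvent;
    import flash.utils.Timer;
    
    function buildWalker(frameBitmaps:Vector.<Bitmap>, fps:Number):Sprite {
        var characterWalking:Sprite = new Sprite();
        for (var i:int = 0; i < frameBitmaps.length; i++) {
            frameBitmaps[i].visible = (i == 0);   //only the first frame starts visible
            characterWalking.addChild(frameBitmaps[i]);
        }
    
        var current:int = 0;
        var ticker:Timer = new Timer(1000 / fps);
        ticker.addEventListener(TimerEvent.TIMER, function(e:TimerEvent):void {
            frameBitmaps[current].visible = false;
            current = (current + 1) % frameBitmaps.length;
            frameBitmaps[current].visible = true;
        });
        ticker.start();
        return characterWalking;
    }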

    Most games people made with Flash can easily run at 60 fps on most mobile devices with that basic bitmap strategy.
    Last edited by moot; 04-03-2014 at 04:08 PM.

  7. #7
    Senior Member joshstrike's Avatar
    Join Date
    Jan 2001
    Location
    Alhama de Granada, España
    Posts
    1,136
    Thank you. I get this... my question really is, does trashing the bitmap really take it off the GPU memory? Don't you have to trash the bitmapdata? If so, would it be faster to store it in a ByteArray and bring it back than to copyPixels again when you need it?

  8. #8
    Senior Member
    Join Date
    Nov 2001
    Posts
    1,145
    Yes, the bitmap and the bitmapdata are two separate things. Store both in arrays and trash both, so you can removeChild() the bitmap, dispose() the bitmapdata, and null all other references.
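
    The teardown for one frame looks roughly like this (the names are just placeholders):

    Code:
    //pull the Bitmap off the display list, dispose the BitmapData, and null the
    //references so nothing keeps it alive
    import flash.display.Bitmap;
    import flash.display.BitmapData;
    import flash.display.DisplayObjectContainer;
    
    function trashFrame(bmp:Bitmap, bmd:BitmapData, container:DisplayObjectContainer):void {
        if (container.contains(bmp)) container.removeChild(bmp);
        bmp.bitmapData = null;   //drop the Bitmap's pointer to the data
        bmd.dispose();           //free the pixel buffer itself
        //...and remember to null or splice the entries in your own arrays as well
    }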

    I don't use bytearray so maybe I'm missing something... Isn't a bytearray a copy of a bmp or jpg? If you have the bmp or jpg, just copy it whenever you need it. It sounds like you're making an extra copy.

  9. #9
    Senior Member joshstrike's Avatar
    Join Date
    Jan 2001
    Location
    Alhama de Granada, España
    Posts
    1,136
    ByteArray is just raw data, but BitmapData has getPixels() and setPixels() methods that copy the pixels to or from a ByteArray.

    So...what I'm thinking is... let's say you know that you want to have up to 100 Bitmaps on screen at a time, and there are maybe 1000 actual art assets (from sprite sheets) but you don't need them all at once, just 100 at a time. 1000 would be too many to cache on the GPU at once.

    But what if I make 100 Bitmaps of size 0x0 and put them on stage. So far, no GPU use. Then you run through all your art assets and copyPixels() to turn them into BitmapData. Later on I will just look for an empty Bitmap and set its .bitmapData to one of these things, or set it back to empty when I don't need it (never removing it from the stage). But my question is, is the GPU caching actually taking place when I create those BitmapData objects? And if so, is it certain that sending them to a ByteArray and disposing them actually removes them from the GPU...?

    Basically I want to know how and when AIR is creating and uploading textures to the Stage3D context, when it never gives you access to that context...

  10. #10
    Senior Member
    Join Date
    Nov 2001
    Posts
    1,145
    Bytearray is in the ram like everything else. It looks like bytearray is used when you want to manipulate the image - convert it - in Flash.

    Garbage collection only happens to ram. So if you just stop pointing to something in ram, it will get collected. You have to delete everything from gpu yourself - no automatic garbage collection.

    If you're really stuck behind the processor's abilities, look into ATF. It's a way to convert all image types to one compressed type that goes on the gpu on all three OS's.
    Here's an article: http://www.bytearray.org/?paged=18

    You have to get into generating the images IN Flash. You should only be using textures, and those textures should be highly optimized. Sprite sheets of animations are paper technology.
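
    Just to be clear, ATF only comes into play if you move to Stage3D/Starling - the plain display list never exposes the context. The upload side looks roughly like this (sketch only; atfBytes is assumed to be the loaded .atf file):

    Code:
    //uploading an ATF file as a compressed Stage3D texture
    import flash.display3D.Context3D;
    import flash.display3D.Context3DTextureFormat;
    import flash.display3D.textures.Texture;
    import flash.utils.ByteArray;
    
    function uploadAtf(context3D:Context3D, atfBytes:ByteArray, size:int):Texture {
        var tex:Texture = context3D.createTexture(size, size, Context3DTextureFormat.COMPRESSED, false);
        tex.uploadCompressedTextureFromByteArray(atfBytes, 0);   //the GPU keeps the compressed data
        return tex;
    }
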
    Last edited by moot; 04-04-2014 at 10:23 AM.

  11. #11
    Senior Member joshstrike's Avatar
    Join Date
    Jan 2001
    Location
    Alhama de Granada, España
    Posts
    1,136
    This is a comic-book-based tiler for mobile - sprite sheets are the only way to go. Obviously any art that can be generated in code, is. I'm not a n00b. But I'm not talking about (or worried about) storing loaded images in RAM -- that is something easily managed. I'm talking about moving them between normal RAM and the GPU's RAM, which is much, much smaller. Does that make sense?

  12. #12
    Senior Member
    Join Date
    Nov 2001
    Posts
    1,145
    Yep, I'm clear on the issue. I spent weeks blitting on ipad 1 a couple years ago.

    I wasn't saying you don't understand anything, I was saying that if you understand how much gpu there is on the bottom of your target devices and the sprite sheets you're using are nowhere near it, you have to step back and re-evaluate what you're trying to do.

    You're talking about showing static images?

    Or are you talking about the classic animation/game issues of having too many sprites that you MUST use in the next scene?

    Dude, you have to answer this question: Do you use Scout?

  13. #13
    Senior Member joshstrike's Avatar
    Join Date
    Jan 2001
    Location
    Alhama de Granada, España
    Posts
    1,136
    Quote Originally Posted by moot View Post
    Yep, I'm clear on the issue. I spent weeks blitting on ipad 1 a couple years ago.

    I wasn't saying you don't understand anything, I was saying that if you understand how much gpu there is on the bottom of your target devices and the sprite sheets you're using are nowhere near it, you have to step back and re-evaluate what you're trying to do.

    You're talking about showing static images?

    Or are you talking about the classic animation/game issues of having too many sprites that you MUST use in the next scene?

    Dude, you have to answer this question: Do you use Scout?
    When you run copyPixels() or draw(), it blocks the main thread which causes all animation to stop. I don't want to have to do that every time I pull a PNG file from memory. The PNG is going to be in memory no matter what.

    But once you have turned that graphic into a BitmapData, I think the Adobe docs are saying that you have actually just sent it to the GPU, even if it's not being placed on the display list. So I want a way to send it to the GPU and remove it from the GPU, in hardware accelerated mode, without having to call draw() every time I want to use it again.

    re: Scout, no, I use a variety of tools to track memory usage. I don't usually code using Adobe tools.

  14. #14
    Senior Member
    Join Date
    Nov 2001
    Posts
    1,145
    You can't do that. You're limited by the device's ram for storage of images/image data like that and you said the size of your image files is way too big.

    Converting any image to gpu data (general terms for copyPixels or draw) takes processor time and causes the fps to freeze. There is no way around this. The only choice you have is when you're going to do it.

    So you have to generate the images in flash or store the image data on the user's hard drive (include the files in the app or load at runtime off server).



    If you haven't yet, you have to check out some videos on Scout, it tracks everything we're talking about. It's a true programmer's tool, it's great.

  15. #15
    Senior Member joshstrike's Avatar
    Join Date
    Jan 2001
    Location
    Alhama de Granada, España
    Posts
    1,136
    Okay, I still think we're talking about different things... what I mean is like this:

    Code:
    //non-class pseudocode//
    import flash.display.Bitmap;
    import flash.display.BitmapData;
    import flash.display.Loader;
    import flash.display.LoaderInfo;
    import flash.events.Event;
    import flash.geom.Point;
    import flash.geom.Rectangle;
    import flash.net.URLRequest;
    
    //many bitmapdatas
    var animation:Vector.<BitmapData> = new Vector.<BitmapData>();
    //one bitmap
    var character:Bitmap = new Bitmap();
    //CPU-side pixel buffers for the frames that are currently "unloaded"
    var buffer:Vector.<Vector.<uint>> = new Vector.<Vector.<uint>>();
    
    loadSheet();
    function loadSheet():void {
        //an image needs a Loader (not a URLLoader) so we get a Bitmap back
        var l:Loader = new Loader();
        l.contentLoaderInfo.addEventListener(Event.COMPLETE, this.onLoadSheet, false, 0, true);
        l.load(new URLRequest(this.assetBase + data[pointer].src));
    }
    function onLoadSheet(evt:Event):void {
        //I think that the whole BitmapData of the loader is now on the GPU. Is that true?
        var wholeSheetData:BitmapData = Bitmap(LoaderInfo(evt.target).content).bitmapData;
    
        //let's just say there are ten frames to the animation on the sprite sheet, 256px each.
        for (var k:int = 0; k < 10; k++) {
            animation.push(new BitmapData(256, 256, true, 0x00000000));
            animation[k].copyPixels(wholeSheetData, new Rectangle(k * 256, 0, 256, 256), new Point(0, 0));
        }
    
        //now we have ten bitmapdatas, one for each frame. So we dump the original sheet.
        //This presumably removes the large sheet from memory:
        wholeSheetData.dispose();
    
        //Now. Let's say at this stage in the game we're only going to use frames 0-4, and later on
        //we'll want frames 5-9 of that animation:
        for (k = 5; k < 10; k++) {
            //this makes a copy of the raw data in a way that I think will NOT create a texture on the GPU.
            //this is only stored in the app's memory:
            buffer.push(animation[k].getVector(animation[k].rect));
    
            //question #1: does this remove the associated texture from the GPU immediately? I think it does.
            //(BitmapData width/height are read-only, so dispose() plus a null slot is the closest real
            //equivalent of "shrinking it to 0x0".)
            animation[k].dispose();
            animation[k] = null;
    
            //later on we will unbuffer things by rebuilding these frames from the buffer.
    
            //This is my question: Is this a way to force texture creation/destruction on the GPU?
        }
    }
    function setFrame(f:int):void {
        if (!animation[f]) {
            //assume we know we're not going to use frames 0-4 anymore
            bufferSomething();
            //obviously I'd want to keep better track of what was where, but just for demonstration:
            animation[f] = new BitmapData(256, 256, true, 0x00000000);
            animation[f].setVector(new Rectangle(0, 0, 256, 256), buffer[f - 5]);
        }
    
        //no new Bitmaps are created; we're just pointing the bitmapData property elsewhere on each frame
        character.bitmapData = animation[f];
    
        //no draw() or copyPixels(). Just setVector() (which is setPixels(), run from a Vector rather than
        //a ByteArray). This way we can intelligently buffer what's on the GPU without taking up any more
        //system memory than the original sprite sheet itself, and without the whole thing needing to take
        //up texture space on the GPU at one time.
    }

  16. #16
    Senior Member
    Join Date
    Nov 2001
    Posts
    1,145
    Your code, your comments, are exactly what I've been talking about. We're good.

    Here's an old function I use to convert a clip into a bitmap. You can see it's the same type of bitmap handling, you're just using different methods.

    Code:
    //  mcTest is a regular hand-drawn movieclip on the stage
    import flash.display.Bitmap;
    import flash.display.BitmapData;
    import flash.display.MovieClip;
    
    private var aBitmaps:Array = [];
    private var bmpMain:Bitmap;
    
    private function init():void {
        // convert first, then store the converted bitmap (not an empty placeholder)
        bmpMain = convertBMP(mcTest, 752, 115);
        aBitmaps.push(bmpMain);
    }
    
    private function convertBMP(pClip:MovieClip, pWidth:int, pHeight:int):Bitmap {
        var bd:BitmapData = new BitmapData(pWidth, pHeight, true, 0x00000000);
        bd.draw(pClip, null, null, null, null, true);   // rasterize the clip, smoothed
        var bmp:Bitmap = new Bitmap(bd, "auto", false);
        bmp.smoothing = true;  // makes flash generated images look better
        pClip = null;
        return bmp;
    }
    You're using copyPixels and getVector. Both of them use processing time like draw. Transferring the data from the cpu to gpu takes approximately the same time whether you're using copyPixels, getVector, draw, or whatever. They all cause an fps hit. When somebody says one is faster, they're talking about a small percentage faster, like vector vs. array.

    You're focusing on how you're moving the data to the gpu thinking it's different, it's not. It doesn't matter which function you use, you can't avoid that hiccup when you transfer the data.

    Do you think you're going to have an issue or are you just trying to figure everything out? You can tween 100s of unique 256x256 bitmaps with no problem. What is your estimate of max total bitmap tiles and their dimensions you would need?
    Last edited by moot; 04-05-2014 at 02:08 PM.

  17. #17
    Senior Member joshstrike's Avatar
    Join Date
    Jan 2001
    Location
    Alhama de Granada, España
    Posts
    1,136
    Ahha... I see... I think you answered my basic question. So draw(), copyPixels(), or setPixels() are all uploading to the GPU when you call them, and that's what takes time, right? Not the draw() operation itself. So setPixels() is no faster than draw()...?

    If I were working with vector art, then I guess draw() is the only way to go... unfortunately I'm not, because the comic book artist is hand-drawing the whole thing.

    Quote Originally Posted by moot View Post
    Do you think you're going to have an issue or are you just trying to figure everything out? You can tween 100s of unique 256x256 bitmaps with no problem. What is your estimate of max total bitmap tiles and their dimensions you would need?
    I read somewhere that the max amount of BitmapData should be under 4096x4096 on an iPhone 3S, which is the lowest model we're targeting. So that's what I've been worried about. We'll be loading at least fifty 256x256 bitmaps and another two hundred 128x128s over the course of the game, maybe more if we have expansion packs. So I want to avoid hiccups, but I also don't want to try to throw all of that onto the GPU at one time.

  18. #18
    Senior Member
    Join Date
    Nov 2001
    Posts
    1,145
    Yes, they're all the same, they just handle different data. One does movieclips, one does bmp files, one does arrays.

    It makes natural sense. Creating the bitmapdata in the gpu is what takes effort (creating something new). Once the bitmapdata is there, it's no problem moving it around; it doesn't change.

    The device sets the maximum bitmapdata size based on its screen. So set the maximum size of your tiles to smaller than the device's max. Then you can make giant bitmap images out of the tiles. You can tween around a 20000x20000 bitmap on an ipad 3 at 60fps as long as it's built with tiles under the max size. If you try to use one 20000x20000 bitmap, it will be processed on the cpu, not the gpu, and you'll get cpu speed.

    You should go ahead and get some real numbers. You're just predicting trouble. You have to use bitmapdata pooling, so there's no reason not to build it.
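
    A bare-bones version of that pooling looks something like this (the pool size and names are just examples):

    Code:
    //a fixed set of Bitmap shells created up front, re-pointed at whatever BitmapData
    //a scene needs instead of creating/destroying display objects mid-game
    import flash.display.Bitmap;
    import flash.display.BitmapData;
    
    var pool:Vector.<Bitmap> = new Vector.<Bitmap>();
    for (var i:int = 0; i < 100; i++) {
        var b:Bitmap = new Bitmap();
        b.visible = false;
        addChild(b);          //created and parented during a loading scene
        pool.push(b);
    }
    
    function acquire(data:BitmapData):Bitmap {
        for each (var bmp:Bitmap in pool) {
            if (!bmp.visible) {
                bmp.bitmapData = data;
                bmp.visible = true;
                return bmp;
            }
        }
        return null;   //pool exhausted - size it for the worst case
    }
    
    function release(bmp:Bitmap):void {
        bmp.visible = false;
        bmp.bitmapData = null;
    }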

  19. #19
    Senior Member joshstrike's Avatar
    Join Date
    Jan 2001
    Location
    Alhama de Granada, España
    Posts
    1,136
    Thanks for taking the time to help. I know I'm going to have to just try it (I actually don't own an iPhone, so that's the first step). I just wanted to know what I was dealing with so I didn't start off on the wrong footing.
    Much appreciated.
