
07-06-2001, 02:47 PM
So I'm not sure if anyone else has posted anything like this, but I thought I'd throw it out there.... please remember - I'm not a mathematician nor a statistician (I guess I might not even be a speller :) )

With that in mind - here ya go...

I did some testing (not much) with the 'load' method of the XML object. I started out with the hypothesis that using attributes to describe the data would be faster in Flash's parser than using elements to describe the data. Both the Large and Small records were repeated either 1 time, 10 times, 100 times, 1,000 times, or 10,000 times.

Test Machine:
AMDK6-2 450MHz
300 MB RAM

Large (element-based) format:

<top><inside1>a</inside1><inside2>b</inside2><inside3>c</inside3></top>

Small (attribute-based) format:

<top inside1="a" inside2="b" inside3="c"/>
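As a side note, here's a quick sketch in plain JavaScript (not the original Flash test harness - the function names are made up for illustration) that generates N repetitions of each format and confirms the per-record byte counts used in the results below:

```javascript
// Generate N repetitions of each test record and compare sizes.
// Illustrative sketch only - the original tests were run in Flash.
function buildLarge(n) {
  var out = "";
  for (var i = 0; i < n; i++) {
    // element-based record: data lives in child nodes
    out += "<top><inside1>a</inside1><inside2>b</inside2><inside3>c</inside3></top>";
  }
  return out;
}

function buildSmall(n) {
  var out = "";
  for (var i = 0; i < n; i++) {
    // attribute-based record: same data, no child nodes
    out += '<top inside1="a" inside2="b" inside3="c"/>';
  }
  return out;
}

console.log(buildLarge(1).length); // 71 bytes per record
console.log(buildSmall(1).length); // 42 bytes per record
```

The 71- and 42-byte figures line up with the 1x file sizes reported below.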


Large (elements):
1x = .05 seconds (71 bytes)
10x = .32 seconds (710 bytes)
100x = 2.68 seconds (7 KB)
1,000x = 49.65 seconds (70 KB) ** Flash gave the error that a script was causing it to run slowly and asked if I wanted to abort - I chose NOT to abort and let it finish. This affected the total time, but that's kind of the idea: to see how long it takes to get to the end. **


Small (attributes):
1x = .05 seconds (42 bytes)
10x = .06 seconds (420 bytes)
100x = .31 seconds (5 KB)
1,000x = 3.27 seconds (42 KB) - 60% of the file size in 6.5% of the time
10,000x = 17.24 seconds (165 KB) - 6.034 seconds for equiv file size on Large 1,000x

1,000xLarge and 10,000xSmall:

A file that was 2.3 times the size took 65% less time to load. Large had 4,000 'nodes' - Small had 10,000 'nodes' - 2.5 times as many nodes.

To describe the same amount of data (different number of nodes - Large=4,000, Small=1,000):

1,000xLarge and 1,000xSmall:

70 KB vs 42 KB
49.65 vs 3.27

60% of the size, 6.5% of the time.

Taking this data and extrapolating both backward and forward, we can 'assume' that equivalent file sizes will fall between these boundaries:

70 KB times 2.357 = 164.99 KB @ 17.24 seconds (Small)
49.65 seconds times 2.357 = 117.025 seconds @ 164.99 KB (Large) (to load the same size file)
So.... one boundary for the 'same file size' could be an 85.268% savings on time to load. (Taken from making the actual file sizes equivalent.)


1,000 nodes times 4 = 4,000 nodes @ 49.65 seconds (Large)
3.27 seconds times 4 = 13.08 seconds @ 4,000 nodes (Small) (to load the same number of nodes)
So.... the other boundary, for the 'same number of nodes', could be a 73.655% savings on time to load. (Taken from making the number of nodes in the file equivalent.)
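To make the arithmetic in the two boundary estimates above easy to check, here is a small JavaScript sketch. The variable names are mine; the measurements are the ones reported earlier in the thread. (Using the exact 165/70 size ratio gives a scaled Large time of ~117.03 s, versus the 117.025 s above, which came from rounding the ratio to 2.357.)

```javascript
// Reproducing the extrapolation arithmetic from the post above.
// Measured values (seconds, KB) come straight from the reported tests.
var largeTime1000 = 49.65, largeSize1000 = 70;    // 1,000x element-based
var smallTime1000 = 3.27;                          // 1,000x attribute-based
var smallTime10000 = 17.24, smallSize10000 = 165;  // 10,000x attribute-based

// Boundary 1: scale the Large file up to the same size as the 10,000x Small file.
var sizeRatio = smallSize10000 / largeSize1000;           // ~2.357
var largeTimeScaled = largeTime1000 * sizeRatio;          // ~117.03 s
var savingsBySize = 1 - smallTime10000 / largeTimeScaled; // ~85.3% savings

// Boundary 2: scale the Small file up to the same node count (4,000 nodes).
var smallTimeScaled = smallTime1000 * 4;                   // 13.08 s
var savingsByNodes = 1 - smallTimeScaled / largeTime1000;  // ~73.7% savings

console.log(savingsBySize.toFixed(3), savingsByNodes.toFixed(3));
```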

Well... that's it... yeah I know.. it's sad that this is what I find to do when I want to have fun... ;) Oh well....

... and if anybody else has testing they've done - post it here. Maybe we can try to gather some kind of 'Efficiency' polls.... try to figure out what the best way to do Flash/XML is...

My contribution: An XML document using attributes to describe data is loaded quicker than an XML document using elements to describe data.

07-07-2001, 12:22 AM
I thought attributes loaded quicker, but I just put it down to my imagination. If you think about it, each node is a new XML object, which means that a lot more objects are created.

Also I find that attributes are easier to code for in a lot of cases. I use nodes where I need the data repeated any number of times, and I use attributes when there is only one possible value.



07-07-2001, 03:27 PM
Even going with the idea that each node is a 'new object', you can't explain why 10,000 nodes load faster than 4,000 nodes.... :)

I also think that there is some kind of relationship (detrimental to time) between load time and the number of children a node has. The 10,000 nodes had no children - while the 4,000 nodes were split up: each of the 1,000 main nodes had 3 children, for 4,000 nodes total. So there has to be some relationship there.... I wonder if 2,000 nodes with 2 children each would load faster than the 1,000/3 but still slower than the 10,000/0...

I don't know - maybe I'm all wrong.

07-21-2001, 02:52 PM
Hey guys - we need others to run similar tests and post DESCRIPTIVE results and processes.... we need to get a better idea of how to deal with this...

09-18-2001, 04:15 AM
Well, how about extending your test FLA with output to the console (trace), and then distributing your test FLA with your XML files so others can run the test and copy/paste their results to the thread!?

09-18-2001, 01:50 PM
Will do. Give me a week...

10-28-2001, 08:14 PM
So... I know it's been a LOT longer than a week... but I was going through my old posts.. and noticed several that I hadn't finished.

Well... I sat down and finished a very simple app. It's all of the Open Source variety.

Here's what I think would be good to have happen::

1. Unzip it all to some directory.
2. Run either of the Standalone EXEs.

There are comments in the FLA about what's going on, but, in short, the first four buttons run a time test on loading the 'largeXML' format, and buttons 5-9 run time tests on loading the 'smallXML' format. I was planning later on to add functionality so that the 10th button could test a file of the end user's choosing - but I haven't yet.

So... How about we adopt a standard convention for these tests?

1. Close all Apps that might be running, either in the foreground or in the background, if you can.
2. Each 'button/test' must be run 10 times in succession.
3. Take those 10 times and find the average for that 'button/test'.
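For step 3, something as simple as this would do (a JavaScript sketch - the sample times are made up, not real measurements):

```javascript
// Average the 10 recorded times for one button/test (step 3 above).
function average(times) {
  var sum = 0;
  for (var i = 0; i < times.length; i++) sum += times[i];
  return sum / times.length;
}

var runs = [50, 48, 52, 49, 51, 50, 47, 53, 50, 50]; // ms, hypothetical
console.log(average(runs)); // 50
```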

When you post your results, please do so as an addition to this thread, and please use this format::

CPU: speed, type
RAM: size, type
HD: capacity, spin rate, transfer rate, type
Test1: avg. time
Test2: avg. time
Test3: avg. time
Test4: avg. time
Test5: avg. time
Test6: avg. time
Test7: avg. time
Test8: avg. time
Test9: avg. time
Test10: avg. time

So.... for my computer it would look like this:

CPU: 450MHz, AMDK6-2
HD: 20GB, 7200rpm, ATA66, IDE
Test1: blah
Test2: blah
Test3: blah
Test4: blah
Test5: blah
Test6: blah
Test7: blah
Test8: blah
Test9: blah
Test10: blah

For my friends computer it would look like this:

HD: 80GB/(2x)40GB, 7200rpm, ATA100, RAID-0 (Striped)
Test1: blah
Test2: blah
Test3: blah
Test4: blah
Test5: blah
Test6: blah
Test7: blah
Test8: blah
Test9: blah
Test10: blah

Does that make sense???

I'm not sure, but I think that's all we really need to know about hardware... if anybody else has suggestions - let us know.

OH - and the link to DL the Zip file::


Test away!

10-31-2001, 06:08 AM
OK, I didn't read Vaykent's post real well about what I was or was not supposed to do, as I did the testing offline. I found a bunch of typical Macs, as I figured there would be a heap of people able to post about their high-end machines.

I also tested in a browser and running as an app. The times are separated by commas. The 4th test usually timed out and seemed to crash the Flash player - probably not enough memory.
This didn't occur in OS X, as Flash runs native there and therefore has virtual memory.

All times in MilliSeconds

Mac G3 Desktop 233mhz
768mb of Ram 40gb HD 7200RPM
Three Tests: OS9.0 As App, IE5.0 (latest Patch) OS9.0, MacOSX.1 IE5.1

Test 1: 50, 117, 100
Test 2: 200, 367, 433
Test 3: 2083, 12266, 3833
Test 4: Aborted
Test 5: 33, 84, 50
Test 6: 66, 117, 100
Test 7: 283, 433, 483
Test 8: 2450, 433, 483
Test 9: Aborted

iMac DV 500mhz G3
330mb of Ram, 20gb HD 5200RPM
Three Tests: OS9.0 App, IE5.1 OS X.1, OSX.1 App (running in classic)

Test 1: 33, 50, 50
Test 2: 133, 250, 133
Test 3: 1050, 1817, 1066
Test 4: Aborted, 25533, Abort
Test 5: 83, 50, 16
Test 6: 83, 50, 33
Test 7: 166, 250, 150
Test 8: 1183, 2133, 1283
Test 9: Aborted

Interesting results with tests 5, 6 & 7 on the iMac, as this third run was in Classic mode, which is basically an emulator, and you would expect it to be *much* slower. I did it purely out of interest. I was expecting the results to be a quarter of the initial results from running Classic straight. Probably a sign of the 'optimisations' that Apple did to Classic mode in OS X.1.



10-31-2001, 02:24 PM
Sweet. Thanks a bunch. I don't have a Mac on hand to test on... I'm hopin' a lot of other people will download this and try it on their machines as well...

05-03-2002, 08:30 AM

I got some note on the 10,000-node version being faster than the 4,000-node version. I think it just depends on the parsing algorithm. I guess it's the recursion that's slowing the thing down. While parsing the non-attribute version, the parser has to keep track of a node stack, because it will repeatedly be parsing child nodes (though the stack will never contain more than 1 node, the top-level one). In the attribute version, no recursion is present, since there are no child nodes, so the node stack will always be empty. Now, without recursion and without the need to push things onto and pop things off a stack, the parsing might deliver its work a whole lot faster... apparently.

I hope I'm making some sense...
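The guessed mechanism can be sketched like this: a toy JavaScript walk that pushes a stack entry for every child element it descends into, while attributes are consumed flat with no recursion at all. To be clear, this is a sketch of the idea, not Flash's actual parser.

```javascript
// Toy illustration: element children force push/pop on a node stack,
// while attributes are read in place. NOT Flash's real parser.
function countStackPushes(node) {
  var pushes = 0;
  for (var i = 0; i < node.children.length; i++) {
    pushes += 1;                                  // descend: push child on the stack
    pushes += countStackPushes(node.children[i]); // recurse into its children
  }
  // Attributes would be read in place here - no push, no recursion.
  return pushes;
}

// "Large" record: <top> with three child elements.
var large = { children: [{ children: [] }, { children: [] }, { children: [] }] };
// "Small" record: <top inside1="a" .../> with everything in attributes.
var small = { children: [] };

console.log(countStackPushes(large)); // 3 pushes per record
console.log(countStackPushes(small)); // 0 pushes per record
```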



05-03-2002, 11:55 AM
So... you're saying that you have a parser that runs faster for certain types of structures??

Great! - Could we see some code?

05-03-2002, 06:34 PM
I don't think he has code for a different parser - he's just telling you how the Flash parser works.

BTW, what are the numbers for Flash MX running these tests?



05-03-2002, 06:39 PM
The bare parser may perform equally in all parsing contexts, but it's the attached code (the code that gets executed when a rule is matched) that may slow it down. So please don't be sarcastic - I'm only trying to put my finger on the problem. Anyway, it doesn't really matter. We know that nodes are slower than attributes, so I'll shut my mouth right here.


05-03-2002, 08:11 PM
This is the code that is used for the parsing:

function go(xmlDocumentURL){
    _root.xmlHolder = new XML();
    _root.xmlHolder.onLoad = stopTimer;
    // record the start time, then kick off the load
    _root.startTime = getTimer();
    _root.xmlHolder.load(xmlDocumentURL);
}

function stopTimer(){
    _root.loadTime = getTimer() - _root.startTime;
}

As you can see this is purely testing the XML loading, it is not actually doing any parsing in Flash.

05-06-2002, 03:36 AM
Maybe I expressed myself somewhat unclearly, but I did not mean that you do the parsing yourself - I was talking about Flash's parsing algorithm. I just took a guess at how Flash might parse XML (as I have implemented a very simple XML parser once in C++). And from the documentation you get to know that "the XML data is not parsed until it is completely downloaded", so the load method also parses.
I haven't tested all the examples yet on Flash MX, but I guess that must be somewhat faster, as MM claims to have optimized the XML parser.


05-06-2002, 11:46 AM
MM claims to have optimised... yeah. I'd say so!

The difference from 5 to MX is the same as the difference from 'not loading' to 'finished in 1/10th of a second'.

Yeah... I'd say they optimised! ;)

05-06-2002, 07:23 PM
I know Flash MX is fast, but what are the numbers for Flash MX running these tests?



05-09-2002, 11:34 AM

I ran the tests on my computer and here are the results - and boy, can you see the difference between Flash 5 and MX.

Flash MX is the first number; the numbers in brackets are the results of the Flash 5 exe that was included in the download.

CPU: 1.4 Ghz AMD Thunderbird
RAM: 512MB, 266DDR RAM
HD: 40GB, 7200rpm, ATA100, IDE
Test1: .8 (19.8)
Test2: 1 (59.1)
Test3: 5.5 (425.7)
Test4: 64.6 (8700.4)
Test5: .8 (15.9)
Test6: .9 (20.3)
Test7: 2.8 (62.1)
Test8: 23.9 (635)
Test9: 557.3 (15480.2)

Test #9 came up with the "script running slowly" error three times out of 10 with flash 5.


05-09-2002, 04:31 PM
I knew it was faster, but 20~30 times is impressive. From what I hear, a lot of the string handling functions have been rewritten as native code, so that is probably where they got their speed.

Will put some tests up from my Mac in the next couple of days.