So I'm not sure if anyone else has posted anything like this, but I thought I'd throw it out there.... please remember - I'm not a mathematician nor a statistician (I guess I might not even be a speller )
With that in mind - here ya go...
I did some testing (not much) with the 'load' method of the XML object. I started out with the idea that using attributes to describe the data would be faster in Flash's parser than using elements to describe the data. Both the Large and Small tags were repeated 1, 10, 100, 1,000, or 10,000 times.
300 MB RAM
Large: <top><inside1>a</inside1><inside2>b</inside2><inside3>c</inside3></top>
Small: <top inside1="a" inside2="b" inside3="c"/>
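For reference, the two record shapes can be reproduced and measured like this (a Python sketch; the original test files aren't attached here, so the records are reconstructed from the formats above - the byte counts match the 1x figures below):

```python
# The two record shapes from the test (reconstructed from the post).
large_record = "<top><inside1>a</inside1><inside2>b</inside2><inside3>c</inside3></top>"
small_record = '<top inside1="a" inside2="b" inside3="c"/>'

print(len(large_record))  # 71 -- matches the 1x Large figure
print(len(small_record))  # 42 -- matches the 1x Small figure
```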
Large results:
1x=.05 seconds (71 bytes)
10x=.32 seconds (710 bytes)
100x=2.68 seconds (7 KB)
1,000x=49.65 seconds (70 KB) ** Flash gave the error that a script was causing it to run slowly and asked if I wanted to abort - I chose NOT to abort. I let it finish. This affected the total time, but that's kinda the idea.. to see how long it takes to get to the end... **
Small results:
1x=.05 seconds (42 bytes)
10x=.06 seconds (420 bytes)
100x=.31 seconds (5 KB)
1,000x=3.27 seconds (42 KB) 60% of the file size in 6.5% of the time
10,000x=17.24 seconds (165 KB) 6.034 seconds for equiv file size on Large1,000x
1,000xLarge and 10,000xSmall:
A file that was 2.3 times the size took 65% less time to load. Large had 4,000 'nodes' - Small had 10,000 'nodes' - 2.5 times as many nodes.
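Those node counts can be double-checked with any XML parser. A Python sketch (assuming a wrapper <root> element to make the concatenated records well-formed, which the original files may or may not have had):

```python
import xml.etree.ElementTree as ET

# 1,000x Large and 10,000x Small, as in the test above.
large = "<root>" + 1000 * "<top><inside1>a</inside1><inside2>b</inside2><inside3>c</inside3></top>" + "</root>"
small = "<root>" + 10000 * '<top inside1="a" inside2="b" inside3="c"/>' + "</root>"

# .iter() walks an element and every descendant; subtract 1 to drop the wrapper.
large_nodes = sum(1 for _ in ET.fromstring(large).iter()) - 1
small_nodes = sum(1 for _ in ET.fromstring(small).iter()) - 1
print(large_nodes, small_nodes)  # 4000 10000
```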
To describe the same amount of data (different number of nodes - Large=4,000, Small=1,000):
1,000xLarge and 1,000xSmall:
70 KB vs 42 KB
49.65 vs 3.27
60% the size 6.5% the time.
Taking this data and extrapolating both backward and forward, we can 'assume' that equivalent file sizes will produce between:
70 KB times 2.357 = 164.99 KB @ 17.24 seconds (Small)
49.65 seconds times 2.357 = 117.03 seconds @ 164.99 KB (Large) (to load the same size file)
So.... one boundary for the 'same file size' could be - 85.268% savings on time to load. (Taken off of making the actual file size equivalent.)
1,000 nodes times 4 = 4,000 nodes @ 49.65 seconds (Large)
3.27 seconds times 4 = 13.08 seconds @ 4,000 nodes (Small) (to load the same number of nodes)
So.... one boundary for the 'same number of nodes' could be - 73.656% savings on time to load. (Taken off of making the number of nodes in the file equivalent.)
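Both 'boundary' figures are straight arithmetic on the measured times, so they are easy to re-derive (Python, reproducing the calculations above):

```python
# Same-file-size boundary: Small measured at ~165 KB vs Large extrapolated to ~165 KB.
large_extrapolated = 49.65 * 2.357                     # ~117.03 seconds
savings_by_size = (1 - 17.24 / large_extrapolated) * 100
print(round(savings_by_size, 3))                       # 85.268

# Same-node-count boundary: Small extrapolated to 4,000 nodes vs Large measured.
small_extrapolated = 3.27 * 4                          # 13.08 seconds
savings_by_nodes = (1 - small_extrapolated / 49.65) * 100
print(round(savings_by_nodes, 3))                      # 73.656 (the post gives 73.655, a truncation)
```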
Well... that's it... yeah I know.. it's sad that this is what I find to do when I want to have fun... Oh well....
... and if anybody else has testing they've done - post it here. Maybe we can try to gather some kind of 'efficiency' poll.... try to figure out what the best way to do Flash/XML is...
My contribution: An XML document using attributes to describe data is loaded quicker than an XML document using elements to describe data.
Even going with the idea that each node is a 'new object' you can't explain why 10,000 nodes load faster than 4,000 nodes....
I also think that there is some kind of relationship (detrimental to time) between load time and the number of children a node has. The 10,000 nodes had no children - while the 4,000 nodes were split up: each of the 1,000 main nodes had 3 children, for 4,000 nodes total. So there has to be some relationship there.... I wonder if 2,000 nodes with 2 children each would load faster than the 1,000/3 but still slower than the 10,000/0...
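That middle case is easy to generate if anyone wants to try it - a hypothetical 2,000-parents-with-2-children-each file in the same style (Python sketch; the tag names are invented to match the Large format, and the <root> wrapper is my addition for well-formedness):

```python
# 2,000 parents x 2 children = 6,000 nodes total (vs 4,000 for the 1,000/3
# file and 10,000 for the 10,000/0 file), so this variant isolates
# children-per-node rather than holding total node count fixed.
record = "<top><inside1>a</inside1><inside2>b</inside2></top>"
variant = "<root>" + 2000 * record + "</root>"
print(variant.count("<top>"))  # 2000 parent nodes
```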
So... I know it's been a LOT longer than a week... but I was going through my old posts.. and noticed several that I hadn't finished.
Well... I sat down and finished a very simple app. It's all of the Open Source variety.
Here's what I think would be good to have happen::
1. Unzip it all to some directory.
2. Run either of the Standalone EXEs.
There are comments in the FLA about what's going on, but, in short, the first four buttons run a time test on loading the 'largeXML' format, and buttons 5-9 run time tests on loading the 'smallXML' format. I was planning later on to add functionality so that the 10th button could test a file of the end user's choosing - but I haven't yet.
So... How about we adopt a standard convention for these tests?
1. Close all Apps that might be running, either in the foreground or in the background, if you can.
2. Each 'button/test' must be run 10 times in succession.
3. Take those 10 times and find the average for that 'button/test'.
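Steps 2 and 3 amount to a simple averaging harness. A sketch of what that looks like (Python as a stand-in for the ActionScript timing code, which isn't shown here):

```python
import time
import xml.etree.ElementTree as ET

def average_time_ms(test_fn, runs=10):
    """Run test_fn `runs` times in succession and return the mean time in ms."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        test_fn()
        total += time.perf_counter() - start
    return total * 1000 / runs

# Stand-in for one 'button/test': parse a small XML string.
avg = average_time_ms(lambda: ET.fromstring('<top inside1="a"/>'))
print(avg > 0)  # True
```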
When you post your results, please do so as an addition to this thread, and please use this format::
CPU: speed, type
RAM: size, type
HD: capacity, spin rate, transfer rate, type
Test1: avg. time
Test2: avg. time
Test3: avg. time
Test4: avg. time
Test5: avg. time
Test6: avg. time
Test7: avg. time
Test8: avg. time
Test9: avg. time
Test10: avg. time
OK, I didn't read Vaykent's post very carefully about what I was or was not supposed to do, as I did the testing offline. I tested a bunch of typical Macs, as I figured there would be a heap of people able to post about their high-end machines.
I also tested both in a browser and running as an app. The times are separated by commas. The 4th test usually timed out and seemed to crash the Flash player - probably not enough memory.
This didn't occur in OS X, as Flash runs natively there and therefore has virtual memory.
All times in milliseconds
Mac G3 Desktop, 233MHz
768MB of RAM, 40GB HD, 7200RPM
Three Tests: OS9.0 As App, IE5.0 (latest Patch) OS9.0, MacOSX.1 IE5.1
Test 1: 50, 117, 100
Test 2: 200, 367, 433
Test 3: 2083, 12266, 3833
Test 4: Aborted
Test 5: 33, 84, 50
Test 6: 66, 117, 100
Test 7: 283, 433, 483
Test 8: 2450, 433, 483
Test 9: Aborted
iMac DV, 500MHz G3
330MB of RAM, 20GB HD, 5200RPM
Three Tests: OS9.0 App, IE5.1 OS X.1, OSX.1 App (running in classic)
Test 1: 33, 50, 50
Test 2: 133, 250, 133
Test 3: 1050, 1817, 1066
Test 4: Aborted, 25533, Abort
Test 5: 83, 50, 16
Test 6: 83, 50, 33
Test 7: 166, 250,150
Test 8: 1183, 2133, 1283
Test 9: Aborted
Interesting results with tests 5, 6 & 7 on the iMac: the third column is running in Classic mode, which is basically an emulation of the OS 9 environment of the first column, so you would expect it to be *much* slower. I did it purely out of interest. I was expecting the results to be a quarter of the initial results from running OS 9 straight. Probably a sign of the 'optimisations' that Apple did to Classic mode in OS X.1.
I got some note on the 10,000 node version being faster than the 4,000 node version. I think it just depends on the parsing algorithm. I guess it's the recursion that's slowing the thing down. While parsing the non-attribute version, the parser has to keep track of a node stack, because it will repeatedly be parsing child nodes (though the stack will never contain more than 1 node, the top-level one). In the attribute version, no recursion is present, since there are no child nodes, so the node stack will always be empty. Now, without recursion and without the need to push things onto and pop things off a stack, the parsing might deliver its work a whole lot faster... apparently.
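That stack picture is easy to visualize. A Python sketch, using the standard library's event-based parser as a stand-in for whatever Flash does internally (which is a guess): the element form pushes one extra level per child, while the attribute form never goes below the record tag.

```python
import io
import xml.etree.ElementTree as ET

def max_depth(xml_text):
    """Track how deep the parser's element stack gets while parsing."""
    depth = deepest = 0
    for event, _ in ET.iterparse(io.StringIO(xml_text), events=("start", "end")):
        if event == "start":
            depth += 1
            deepest = max(deepest, depth)
        else:
            depth -= 1
    return deepest

element_form = "<root><top><inside1>a</inside1></top></root>"
attribute_form = '<root><top inside1="a"/></root>'
print(max_depth(element_form))    # 3: root -> top -> inside1
print(max_depth(attribute_form))  # 2: root -> top, nothing pushed for children
```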
The bare parser may perform equally in all parsing contexts, but it's the attached code (the code that gets executed when a rule is matched) that may slow it down. So please don't be sarcastic, I'm only trying to put my finger on the problem. Anyway, it doesn't really matter. We know that nodes are slower than attributes, so I'll shut my mouth right here.
Maybe I expressed myself somewhat unclearly, but I did not mean that you should do the parsing yourself - I was talking about Flash's parsing algorithm. I just took a guess at how Flash might parse XML (as I have implemented a very simple XML parser once in C++). And from the documentation you learn that "the XML data is not parsed until it is completely downloaded", so the load method also parses.
I haven't tested all examples yet on Flash MX, but I guess that must be somewhat faster, as MM claims to have optimized the XML parser.