I can speak to WebGL as we've actually built some stuff on top of it. Yes, performance is good, sometimes. But if your user is on hardware like the Intel X3100 graphics chip, the WebGL context will run in software mode, performance will be dismal, and as a developer you have no real way to check for that.
Secondly, hardware support for WebGL is very dismal right now. For example, newly shipping MacBook Airs don't support WebGL.
Third, we found so many inconsistencies across different hardware and different browsers that it made it not worth it to work on a WebGL project for the time being, especially for a small team. We wrote a number of runtime checks, but we still could not account for all the bugs, or find ways around every one of them.
If you're writing a new 2D game in HTML5 from scratch, you'll have a much larger market share by going after a slightly simpler (artistically) game using canvas, rather than trying to go for more horsepower with WebGL. However, WebGL can be useful for porting, if you'd like to see an OpenGL game you've already made come to the browser.
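For what it's worth, on the "no real checks" point: you can at least try to sniff the unmasked renderer string. A heuristic sketch only — the WEBGL_debug_renderer_info extension is optional, arrived after this discussion, and browsers may withhold it, so a null result means "can't tell", not "hardware":

```javascript
// Heuristic: does this WebGL context look software-rendered?
// Relies on the optional WEBGL_debug_renderer_info extension; when the
// browser withholds it, return null ("unknown") rather than guessing.
function looksLikeSoftwareRenderer(gl) {
  var ext = gl.getExtension('WEBGL_debug_renderer_info');
  if (!ext) return null;
  var renderer = gl.getParameter(ext.UNMASKED_RENDERER_WEBGL) || '';
  return /swiftshader|llvmpipe|software/i.test(renderer);
}
```

If this returns true you could fall back to a canvas renderer; if it returns null, you're back to benchmarking a few frames and deciding from the measured frame time.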
How new? My MBA is only 2 or 3 months old and WebGL runs fine (and fast) under Chrome 17. Indeed, under the test on the parent link, I get about 500 objects on Canvas, 6000+ on WebGL.
The latest generation wasn't working in my testing. Hmm, well as of Chrome 15/Firefox 7 they were not supported. Would you try out http://drawwith.me/demo and let me know if that works?
WebGL is disabled by default in Safari on Air (maybe other Macs too) but if you go to Preferences/Advanced and check "Show Develop menu" then you will be able to enable WebGL support through Develop menu. On my Mid-2011 Air it appears pretty fast.
Yeah, we found really dismal support for X3100 hardware. It doesn't support the return statement in vertex or fragment shaders due to a driver bug, and it doesn't support gl_PointCoord in the fragment shader. Yet Chrome and Firefox would allow this hardware to open a WebGL context, with no warning that these basic features of OpenGL are not supported.
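One partial workaround for this class of driver bug is to probe-compile the exact shader constructs you depend on before committing to the WebGL path, rather than trusting that context creation succeeded. A sketch — `gl` is an existing WebGL context, and the probe shader and fallback function are made up for illustration:

```javascript
// Returns true only if the driver actually compiles this shader.
// A successfully created context tells you nothing about feature
// support, so probe the specific constructs you rely on.
function shaderCompiles(gl, type, source) {
  var shader = gl.createShader(type);
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  var ok = gl.getShaderParameter(shader, gl.COMPILE_STATUS);
  gl.deleteShader(shader);
  return !!ok;
}

// Probe: does an early `return` in a fragment shader compile?
// (This is the construct the X3100 driver rejected.)
var returnProbe =
  'precision mediump float;\n' +
  'void main() {\n' +
  '  if (gl_FragCoord.x < 0.0) { gl_FragColor = vec4(0.0); return; }\n' +
  '  gl_FragColor = vec4(1.0);\n' +
  '}\n';

// Hypothetical usage:
// if (!shaderCompiles(gl, gl.FRAGMENT_SHADER, returnProbe)) fallBackToCanvas();
```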
Perhaps you see why some of us are skeptical when WebGL's proponents downplay concerns about it enabling security bugs.
The hardware provides a full-featured CPU with DMA access to the host's memory. Everything better go just right, or it's going to end up remotely exploitable.
> The hardware provides a full-featured CPU with DMA access to the host's memory.
What part of WebGL gives a programmer unrestricted access to the host's memory?
> Everything better go just right, or it's going to end up remotely exploitable.
The same could be said for practically any program or web app. Your web browser could have a buffer overflow in its HTML parsing engine, yet I don't hear you speaking out against HTML... - http://secunia.com/advisories/12959/
> What part of WebGL gives a programmer unrestricted access to the host's memory?
Ever had a video driver bug crash an application or bluescreen Windows?
> Your web browser could have a buffer overflow in its HTML parsing engine, yet I don't hear you speaking out against HTML
It's a fair point. Security is about trade-offs. I am indeed concerned about HTML and do most browsing in a VM. But to me WebGL is farther out on the risk/benefit spectrum than HTML. I would not accept it for ads or ordinary page content. I would consider enabling it selectively for an app that I carefully decided to trust.
* Admittedly I have not tried, but I suspect it's going to be very difficult to get it to work well from within a VM.
* HTML parsers and renderers have indeed had plenty of vulnerabilities. They have taken years to get as secure as they are today and we may not have seen the last of the exploitable bugs in them.
* HTML parsers have been implemented from the beginning to accept untrusted data from the internet. 3D graphics drivers, on the other hand, are designed primarily for performance in a scenario where they are run by a single-user game on the local machine, often with Admin privs already.
* We saw how many years it took Microsoft (the company that could reportedly turn on a dime) to 'get' security to the point that they could ship a secure browser. I don't see much evidence that graphics vendors are even thinking about it yet.
* Apple knows that WebGL bugs will result in jailbroken iPhones. Guess how many years their OpenGL support level is behind the PC...on the very same GPU hardware? Last I checked, someone reported getting OpenGL 3.2 working on a Mac. I've had usable OpenGL 3.3 on a freaking Linux laptop for a couple of years now. More powerful GPUs are up at 4.2.
* A buffer overflow in HTML does not automatically amount to a kernel-level compromise (some recent font-handling bugs in Windows notwithstanding :-). In fact, the latest generation of sandboxed browsers are building defense-in-depth mitigations. Any buffer mismanagement between the GPU, driver, GL, and WebGL seems very likely to result in complete pwnage. The GPU can access host memory directly with no access permissions.
Don't get me wrong - I love the idea of WebGL and want it to succeed. I just would hate to see it end up like Flash, struggling for years to retrofit security onto something that wasn't originally spec'd for it and a bunch of its users getting pwned in the process.
> Ever had a video driver bug crash an application or bluescreen Windows?
Yes, but I've also had audio, webcam, scanner & IDE drivers BSOD my computers over the years as well. Drivers usually get better with time, but crappy drivers aren't unique to video cards.
> Admittedly I have not tried, but I suspect it's going to be very difficult to get it to work well from within a VM.
VM software is getting better at passing 3D through to the native hardware, so it may be easier than you think.
> HTML parsers and renderers have indeed had plenty of vulnerabilities. They have taken years to get as secure as they are today and we may not have seen the last of the exploitable bugs in them.
This is the story of software in general. As long as there will be software, there will be bugs, some of them severe. Doesn't mean you just accept it & not try to prevent them, but you also don't stifle innovation because of the fear of causing them.
> 3D graphics drivers, on the other hand, are designed primarily for performance in a scenario where they are run by a single-user game on the local machine, often with Admin privs already.
Not really, they've supported multiple 3D accessible contexts for years & years now. Also if you're running Vista or Windows 7 you are more than likely already relying on the 3D hardware to render your 2D desktop. Web browsers are already using features of 3D hardware to further accelerate non-WebGL content.
> Apple knows that WebGL bugs will result in jailbroken iPhones.
They also know that they messed up on the antenna design of the iPhone 4. Apple is not immune to flaws.
> Guess how many years their OpenGL support level is behind the PC...
This doesn't really matter because WebGL is essentially a copy of OpenGL ES 2.0, which is closely based on OpenGL 2.0.
> The GPU can access host memory directly with no access permissions.
So can your CPU, what's stopping rogue apps from doing so? Software. WebGL mainly involves writing GLSL scripts that act on buffers & arrays of data. You are limited to what data types you can upload into the video card.
Perhaps there are more vectors for vulnerability, but I'd rather see WebGL take off as a transparent standard than rely on Adobe. Plus its performance could be much greater than sluggish Flash. I admit I'd like to see more granularity as far as control. The last thing I want is WebGL ads eating up my GPU/CPU/battery rendering stupid ragdoll physics simulations.
> Yes, but I've also had audio, webcam, scanner & IDE drivers BSOD my computers over the years as well. Drivers usually get better with time, but crappy drivers aren't unique to video cards.
Absolutely. But webcam, scanner, and disk drivers are not accepting attacker-chosen data from the internet. The audio driver may occasionally. I would be surprised if there had never been any remotely exploitable security vulnerabilities, particularly involving driver support of complex audio compression formats.
> VM software is getting better at passing 3D through to the native hardware, so it may be easier than you think.
Yeah, this will be something to watch.
> This is the story of software in general. As long as there will be software, there will be bugs, some of them severe. Doesn't mean you just accept it & not try to prevent them, but you also don't stifle innovation because of the fear of causing them.
The world's botnet and antivirus vendors thank you for your commitment to innovation.
> Not really, they've supported multiple 3D accessible contexts for years & years now.
Right, but to what extent have they been really tested as a security boundary?
> Also if you're running Vista or Windows 7 you are more than likely already relying on the 3D hardware to render your 2D desktop. Web browsers are already using features of 3D hardware to further accelerate non-WebGL content.
No, web browsers are not accepting shader code from websites and feeding it to the GPU.
> The GPU can access host memory directly with no access permissions.
> So can your CPU, what's stopping rogue apps from doing so? Software.
A lot of software built on very carefully designed hardware memory protections that took decades to mature into the brittle systems we have now.
Out of curiosity, do any GPUs have no-execute page permissions?
And what stops a GPU from modifying the execute-disable bits on CPU-managed memory pages?
> WebGL mainly involves writing GLSL scripts that act on buffers & arrays of data. You are limited to what data types you can upload into the video card.
GLSL is a Turing-complete language, so no, those type restrictions don't limit what it can do. It is provably impossible to put limits on its behavior before you run it. The best you can do is run it with limited access to other memory space. But if that were implemented perfectly, video drivers would never crash.
> Perhaps there are more vectors for vulnerability, but I'd rather see WebGL take off as a transparent standard, than rely on Adobe.
Well anything looks good if your standard of comparison is low enough. :-)
> I admit I'd like to see more granularity as far as control. Last thing I want is WebGL ads eating up my GPU/CPU/battery rendering stupid ragdoll physics simulations.
Honestly your best bet is to turn your computer off & unplug it from the Internet. Sandboxes, VMs, BIOSes are all subject to bugs which could lead to security vulnerabilities. Wrapping yourself in pillows may make you think you're safer, but it's not necessarily true & often is too much effort for most normal users. If you do not trust the software you're running then you should not be running it.
> If you do not trust the software you're running then you should not be running it.
Yes, I often decline to run software. Even if I do "trust" the software and its author, running it may unnecessarily increase my attack surface.
> Honestly your best bet is to turn your computer off & unplug it from the Internet. Sandboxes, VMs, BIOSes are all subject to bugs which could lead to security vulnerabilities. Wrapping yourself in pillows may make you think you're safer, but it's not necessarily true & often is too much effort for most normal users.
That's the same FUD I hear every time I tell someone "that particular feature set is beyond my risk/reward threshold".
Here's the counter-argument to that:
Would you have your surgery at a hospital where the staff were playing Flash or WebGL Facebook games on the intravenous drug pump management console?
We all have decisions to make about risk vs. benefit. I make mine with different considerations than yours. This is OK.
> Would you have your surgery at a hospital where the staff were playing Flash or WebGL Facebook games on the intravenous drug pump management console?
This is quite a bit of FUD in its own right. Would you use a computer whose only ability was to manage an intravenous pump? Apples vs. oranges. Even with medical devices being highly audited, not running Flash or WebGL, and not being on the Internet, they still can end up with life-threatening bugs.
> Would you use a computer who's only ability was to manage an intravenous pump?
Yes, I would prefer exactly such a thing for my IV pump.
> Even with medical devices being highly audited, not running Flash or WebGL & not being on the Internet,
There absolutely are many, many medical devices running Windows XP Embedded plugged into hospital LANs around the country as we speak.
In fact, I used to work on such a system. It was a total PoS from a security perspective. The mechanical engineers who designed it seriously didn't expect anyone would plug it into the LAN. Maybe they've improved it since then, I don't know. I no longer associate with those people.
Summary: Adobe Flash Player before 10.3.183.11 and 11.x before 11.1.102.55 on Windows, Mac OS X, Linux, and Solaris and before 11.1.102.59 on Android, and Adobe AIR before 3.1.0.4880, allows attackers to execute arbitrary code or cause a denial of service (memory corruption) via unspecified vectors, a different vulnerability than CVE-2011-2445, CVE-2011-2451, CVE-2011-2452, CVE-2011-2453, CVE-2011-2454, CVE-2011-2455, and CVE-2011-2459. CVSS Severity: 10.0 (HIGH) Published: 11/11/2011
So I had to look back only 11 days for my example, and that's not to mention the 15 other Adobe bugs with CVEs published that week.
Still medical devices, like SCADA systems, do usually tend to suck because they're computer systems designed by people who've never secured a computer before. Guess what! Did you know you can kill someone wirelessly if they have a Medtronic insulin pump? http://www.reuters.com/article/2011/10/26/us-medtronic-idUST...
But that doesn't have much to do with my reasons for being wary of allowing WebGL.
Fair enough, but that's not really a very good argument for its inherent security either.
There is a long history of problems with Flash. Silverlight has had vulnerabilities as well.
But those are 3rd party binary plugins. It's a lot easier to disable, uninstall, and move beyond 3rd party plugins than it is widely-adopted standards implemented by the browser vendors themselves.
It's because it is so appealing and has the potential to become quite popular that we're talking about it now.
"Finally it's a shame no mobiles support WebGL yet. iOS actually supports WebGL, but they've disabled it! I'm not sure why they've done this, because enabling WebGL is crucial to high performance HTML5 games on mobiles."
The cynic says: App store! App store! Money money money!
Could be good reasons. Perhaps Apple's WebGL implementation actually could be used for exploits, and until it's secure they won't allow it for general use. Notice the one place Apple does allow WebGL is iAds, which are reviewed & controlled by Apple.
And the cynic is a knee-jerk idiot. The App Store is not Apple's money maker by a large margin; selling iPhones is.
Also, Apple HAS added WebGL support to iOS, they just haven't enabled it yet. Tons of reasons exist: battery issues, the implementation not being ironed out yet, and the security risk of WebGL.
Apple cares about the experience of using an iPhone, and if WebGL games in mobile Safari suck the battery dry, stall, or even crash, they ain't gonna enable it just to add one more mark on the iPhone spec sheet.
> if the renderer can leave the majority of the frame time free for game logic to run, it's done its job. Consider it this way: Chrome 15 leaves 86% of the frame time free and Classic leaves 97.6% of the frame time free
Well, if you're going to evaluate it that way then surely you also have to account for the game logic running much slower in JavaScript than in C++.
The bottom line is: If you care about graphics rendering, C++ wins by a large margin. And if you care about game logic, C++ wins by an even bigger margin.
You have to put this test in context. It's basically testing how fast you can take floating-point values out of JS objects, put them in vertex buffers, and shuttle them through the JS <-> C foreign-function interface into the GL library. This is pretty much a worst-case scenario for JS, and Opera gets within about 4x of C++.
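To make that concrete, the benchmark's per-frame hot path is roughly this shape — a sketch only, with `gl`, `sprites`, and all the attribute/buffer setup assumed to exist already:

```javascript
// Per frame: pull floats out of JS objects, pack them into a typed
// array, then cross the JS <-> C boundary once for the whole batch.
function uploadAndDraw(gl, sprites, vertexData) {
  for (var i = 0; i < sprites.length; i++) {
    vertexData[i * 2]     = sprites[i].x;  // property reads on JS objects
    vertexData[i * 2 + 1] = sprites[i].y;
  }
  // One FFI crossing per batch, not one per sprite.
  gl.bufferData(gl.ARRAY_BUFFER, vertexData, gl.DYNAMIC_DRAW);
  gl.drawArrays(gl.POINTS, 0, sprites.length);
}
```

Nearly all of the JS time goes into the packing loop and the bufferData copy; the actual drawing happens on the far side of the FFI.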
For more complex rendering, JS will be faster, relatively, because the GPU will have to do more work for each batch of vertices you send over.
For game logic, JS will be faster, relatively, because going through the foreign function interface is actually somewhat slow. You have to pin GC'ed objects onto the stack, etc. Moreover, calling into code the JIT can't see can disable certain optimizations.
If you look at certain FP microbenchmarks, like nbody and spectral-norm in the Shootout, V8 is within 4x of gcc -O3 on nbody, within 3x on spectral-norm when using gcc's vector extensions, and dead even with gcc on spectral-norm when not using gcc's vector extensions.
The places where V8 still falls down are property access (the prototype OO system forces some overhead) and lack of vectorization. The former can be addressed by using a structure-of-arrays (SoA) layout rather than an array-of-structures (AoS) layout, and if your code is amenable to vectorization you should break out the assembler anyway.
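A minimal illustration of the SoA-vs-AoS point, with made-up data: the SoA loop is flat typed-array indexing with no per-element property lookups, and the typed array can go straight into gl.bufferData with no repacking.

```javascript
// Array-of-structures: each access is a property lookup on a JS object.
var aos = [{ x: 1, y: 2 }, { x: 3, y: 4 }];

// Structure-of-arrays: one flat typed array per field.
var soa = { x: new Float32Array([1, 3]), y: new Float32Array([2, 4]) };

function sumXAoS(points) {
  var s = 0;
  for (var i = 0; i < points.length; i++) s += points[i].x;
  return s;
}

function sumXSoA(points) {
  var s = 0;
  for (var i = 0; i < points.x.length; i++) s += points.x[i];
  return s;
}
```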
A good part of game logic is done in a "scripting" language these days. For example, Lua is used in WoW & Starcraft 2, and the newer Civilization games use Python. Generally they do use the core C++ engine for all the math-heavy tasks like collision detection.
And this obscures the point that many web games rely on scripting languages on the server side to handle most tasks. You can't rely on the client to do much game logic without (rightfully) being paranoid about the users themselves interfering with what's in their active memory and that being sent back to your server...
As a side note, if GPUs and CPUs are both that awesome these days, why do side-scrollers still not look perfect? I've been playing Braid on OS X and while it's a pretty game, scrolling is just noticeably blurry (and this is on a pimped-out MacBook Pro Core i5, NVIDIA GeForce GT 330M).
I've just always held out hope that once the hardware is fast enough, side-scrollers would look just as crisp scrolling as they do standing still.
Yep. Ironically, the quality of the Macbook Pro's screen is what's causing the problem. The Pro's IPS screen has much better color reproduction and view angle consistency than the cheaper TN screens you see everywhere. However, a disadvantage of IPS LCDs for games is a relatively slower color switching speed. Hence, the blurriness during motion.
MBPs do not have IPS displays; only the larger iMacs do. Look at your screen from a downward angle and you'll see it yellowing; look from above and light colors are inverted.
And interestingly, most newer IPS panels have a gray-to-gray time similar to TN, around 4ms (60 fps = 16ms/frame). You can't tell anymore if a display is good or not just by its specs, each one is different.
I looked it up and you're right! That's a surprise to me. My buddy's Pro has significantly better color viewing angles than my MacBook. I assumed it was an S-IPS or something similar.
Performance is more to do with the framerate than visual quality - if you see blurry artwork it may be driver bugs, bad coding, sprites stretched with the wrong quality, or other things like that.
I wonder whether your huge[1] runtime had any effect on the benchmarks. Wouldn't using pure Javascript have been a better idea?
I also notice your "C++" version used Direct3D. Shouldn't you have used OpenGL with the exact same calls as the JS to get a fair comparison with the WebGL versions?
Under the hood most WebGL implementations (Chrome and Firefox) use D3D9 on Windows through ANGLE (http://code.google.com/p/angleproject/) anyway. There is of course additional overhead in WebGL implementations, JavaScript, and using ANGLE as a translation layer, but this is a fair comparison; eventually those sprites are getting rendered by the same drivers and the same D3D runtime.
It's a 61kb script minified, and jQuery 1.7 is currently 91kb. Yes, jQuery is likely to be cached, but IMO it's not huge; one of the game's images could easily be 61kb too. On top of that, we've seen some competing tools generate like half a megabyte of JS :P
Edit: also our intent was to compare real game engines, which necessarily add some performance overhead, rather than a thin experiment.
Hmmm... the author says that the percentage of frame time left for game logic is the important factor, but if raw C++ is around 6 times faster, doesn't that mean you can do 6x more sophisticated game logic in a given amount of free time as well?
That said, the WebGL performance in Chrome is indeed (relatively) impressive. I'm also a MacBook Air user and have no trouble getting good performance on it.
One point to note is that the WebGL engine in Chrome runs all graphics in a single process, at least partly for reasons of security (shader code gets checked by the browser before being sent to the GPU, for example). That is sort of like a driver on top of a driver, so there will always be something like a 2x performance gap between WebGL and C++, I think.
For a given number of triangles/quads, you have a certain amount of time left in the frame period for the game logic. Say 5 ms out of a 17 ms frame (@60 FPS). A faster language like C++ is going to take less time for feeding vertex data to the GPU. So instead of taking 12 ms to push the vertices it may take 2 ms, giving you 15 ms for game logic instead of 5, a 3x increase in time.
The game logic will likely be faster by a similar factor, so you could quite possibly get an 18x increase in game logic capacity.
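Spelling that arithmetic out — every number here is an assumption from the comment, not a measurement:

```javascript
var frameMs = 17;       // one frame at ~60 FPS
var jsRenderMs = 12;    // assumed: JS time spent feeding vertices
var cppRenderMs = 2;    // assumed: same work done in C++
var logicSpeedup = 6;   // assumed: C++ vs JS speedup on game logic itself

var jsLogicBudgetMs = frameMs - jsRenderMs;    // 5 ms left for logic
var cppLogicBudgetMs = frameMs - cppRenderMs;  // 15 ms left for logic

// 3x the time multiplied by 6x the speed = ~18x the logic capacity.
var capacityGain = (cppLogicBudgetMs / jsLogicBudgetMs) * logicSpeedup;
```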
Why was the development version of Opera used, but not of the other browsers? Both stable and development Opera is shown there, but just stable versions of the other ones.
The development Opera does great, but the development versions of the other browsers are likely also much faster than their stable versions.
It's only included in the last chart, with this explanation: "The Opera 12 alpha comes with hardware acceleration but since it's alpha I don't want to include its results yet - sometimes making software more reliable also means making it a little slower, and considering I used the stable branches for all the other browsers I also thought it would be unfair to include Opera 12 alpha (until the end where I throw it in for fun)."
It seems that Canvas 2D isn't accelerated by Firefox 7 on Linux at all. I'm only able to get about 600 objects, versus ~4200 with WebGL. Chrome's Canvas settles at ~4200-4300 on the same system (WebGL does ~6100).
Very interesting analysis. Still, when comparing to C++, we have to consider that with a game that actually does something (game logic, interaction, etc.) the C++ advantage would be much bigger.
Actually, the C++ advantage would be smaller, since the test isn't at all GPU-bound. It's basically testing how fast you can copy data through the JS <-> C++ FFI into the GL library.
A test that stayed within JS would optimize better and perform better relative to C++.
I have a feeling they'll announce the Chrome port to Android at Google I/O next year (which is also a Chrome event), and it might get WebGL support along with it. I know some people were asking the devs about WebGL on Android at last I/O, so they might've taken that into consideration for the next I/O.
It would be a good time to do it, too, because otherwise Mozilla and Opera will do it anyway next year.