7554 – the điện biên phủ game

A game to be published this October, to be precise: the first Vietnamese 3D FPS (first person shooter) game, and you know what it's about, the famous Battle of Điện Biên Phủ. The game features 4 main characters, all of whom joined the anti-colonial movements against the French from the very early days and took part in various operations before arriving at the final confrontation at Điện Biên Phủ. It has not been released yet, but below are some nice preview screenshots. Having some experience with game engines, I can say that PC 3D games nowadays are not as hard to make as they used to be.

They are not really about low-level graphics techniques; you would rarely even touch OpenGL (or DirectX), since the engines support most elements you would need. To make a game interesting, it is more about storyboarding, scene building, animation, physical modeling and game AI. Anyhow, this is an interesting and promising project: not only is it the first real FPS game made in Vietnam, it is also the first game that takes its inspiration and stories from our history!

svg morphing

Vector graphics has been a fascinating topic for me ever since the days of DOS (and Borland C++ 3.1): path, stroke, fill functions… I never really considered JavaScript to be “real programming” 😀, but today we've got many 2D and 3D capabilities in this language, and sometimes we just need to get our idea implemented quickly! The little fun below tries morphing the drawing paths, hence transforming one painting into another. The vector graphics are acquired using AutoTrace, an open source tool that converts (traces) bitmap images into vector form (SVG). Loading the SVG and morphing the paths are easily done with the Raphaël.js library.
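For illustration only, here is a minimal sketch of the idea behind the morphing (this is not the Raphaël.js code of the demo; the point structure and values below are made up): when two traced paths have the same number of points, each animation frame simply interpolates every coordinate between the source and the target.

#include <stdio.h>

/* One 2D point of a traced path (hypothetical structure, for illustration). */
typedef struct { double x, y; } Point;

/* Linearly interpolate every point of path 'a' toward path 'b'.
 * t = 0 gives the first drawing, t = 1 gives the second one. */
static void morph_path(const Point *a, const Point *b, Point *out, int n, double t)
{
    for (int i = 0; i < n; i++) {
        out[i].x = a[i].x + (b[i].x - a[i].x) * t;
        out[i].y = a[i].y + (b[i].y - a[i].y) * t;
    }
}

int main(void)
{
    Point a[2] = { {0, 0}, {10, 0} };   /* a segment of the first painting    */
    Point b[2] = { {0, 5}, {10, 20} };  /* the matching segment of the second */
    Point m[2];
    morph_path(a, b, m, 2, 0.5);        /* halfway through the animation      */
    printf("(%.1f, %.1f) (%.1f, %.1f)\n", m[0].x, m[0].y, m[1].x, m[1].y);
    return 0;
}

In the demo itself this interpolation is delegated to Raphaël.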


Click on the white arrow button to begin animating, click again to reverse the transformation, and move the mouse over each path to highlight it. Click here to see the artist’s original paintings and finer SVG tracings!

verlet integration

A little fun with gaming physics, written using Three.js, the 3D library built on top of HTML5 and WebGL. Physics engines like ODE or PhysX support a wide range of simulations: rigid bodies, soft bodies… While those engines are powerful, they are somewhat weak in scalability, e.g. to simulate the flag below we would need to create an array of hundreds of objects, which may not really be feasible on small devices.

This demo does its own physics calculation using Verlet integration: a numerical method for applying Newton's second law of motion to a system of point masses. Verlet integration, despite its complicated formal description, is simple enough to be calculated and executed in real time on low-end mobile devices.
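To give an idea of how little computation is involved, here is a rough sketch of one Verlet step for an array of point masses (a simplified illustration; the actual demo is written in JavaScript on top of Three.js, and the names below are made up):

typedef struct { float x, y, z; } Vec3;
typedef struct { Vec3 pos, prev; } PointMass;

/* One Verlet step: x(t+dt) = 2*x(t) - x(t-dt) + a*dt^2.
 * Velocity is never stored explicitly; it is implied by (pos - prev). */
static void verlet_step(PointMass *p, int n, Vec3 accel, float dt)
{
    for (int i = 0; i < n; i++) {
        Vec3 cur = p[i].pos;
        p[i].pos.x += (cur.x - p[i].prev.x) + accel.x * dt * dt;
        p[i].pos.y += (cur.y - p[i].prev.y) + accel.y * dt * dt;
        p[i].pos.z += (cur.z - p[i].prev.z) + accel.z * dt * dt;
        p[i].prev = cur;
    }
}

A cloth like the flag then typically just relaxes the distance constraints between neighbouring points a few times per frame; dragging the flag merely displaces a few of those points.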

The demo works best on Chrome but it should work on any browser that supports <canvas> (try dragging the flag to add some motion).

game terrain



A little technique for creating terrains for use in games. Sometimes we need to create game spaces with mountains, plains, valleys, rivers… the environment where our game scenes take place. That could be a labour-intensive task unless we can make use of existing geographic data. Some games like Flight Simulator have their scenes modelled exactly after the real world.

First, we need a topographic map, typically the height-contour map used in artillery. Nowadays, the SRTM and ASTER GDEM projects provide these data for free in digital format. They are satellite (radar) images of the earth's surface with resolutions of up to 90m for SRTM and 30m for ASTER GDEM. I would recommend ASTER GDEM for its higher resolution. Go to the site and follow its complicated registration process to download the data via NASA's FTP server.

The downloaded data comes as Zip packages with height information in GeoTIFF format. Next, we use the MicroDem utility to convert it into PGM, an intermediate format that can be read by Photoshop (or GIMP). Then we save it as a heightmap (PNG or TIFF), which can be imported into most game editors. At that point the terrain is recreated exactly as it appears in the real geographical world; just apply textures, lighting, water and other stuff to make it look good!
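As a rough sketch of what the game editor does with such a heightmap (the choice of an 8-bit binary PGM and the elevation scale are just assumptions for illustration), each grayscale pixel simply becomes the elevation of one terrain vertex:

#include <stdio.h>
#include <stdlib.h>

/* Read an 8-bit binary PGM (P5) heightmap and convert every pixel into a
 * world-space elevation (0 .. max_elev meters). PGM comment lines are
 * ignored here for brevity. */
static float *load_heights(const char *path, int *w, int *h, float max_elev)
{
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;
    int maxval;
    if (fscanf(f, "P5 %d %d %d", w, h, &maxval) != 3) { fclose(f); return NULL; }
    fgetc(f);                                  /* the single whitespace before the pixel data */
    float *heights = malloc((size_t)(*w) * (*h) * sizeof(float));
    for (int i = 0; i < (*w) * (*h); i++)
        heights[i] = (fgetc(f) / (float)maxval) * max_elev;
    fclose(f);
    return heights;
}

Each value then drives the vertical coordinate of the corresponding vertex in the terrain grid.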

Images on the left: 1. the topographic data as rendered by MicroDem, 2. the heightmap (grayscale image) as viewed in Photoshop, 3. the terrain created and roughly decorated (a low-poly version). Here is the Sơn Trà peninsula (or les vallées de mon coeur 😀), the shape, the silhouette that has long been in my heart!

unity




Screenshots of the new Unity desktop. There's certainly a harsh, non-exclusive competition between Unity and GNOME Shell: a proxy war between Ubuntu and Fedora Core, which is in turn a proxy war between Canonical and Red Hat.

Eventually, it's the user experience that will decide which desktop is best (and best for what), and that's still a long way off. But technically, Unity is the first huge, bold break from tradition, whose real goal deep inside is to completely replace the age-old heritage of the X window system with an OpenGL-based one, a problem partially addressed in my previous post.

I am trying Ubuntu's new desktop, introduced lately with Natty Narwhal (11.04). After heavy development phases, Unity has reached its alpha stage, a massive move replacing the GNOME desktop environment with a completely new one written from scratch. I've read too many negative reviews about Unity already, but personally I think this is a good move. Developed from Ubuntu's Netbook Remix (which I didn't like much, you shouldn't have to maximize windows like that all the time) and switched from Clutter to the Compiz 3D window compositing system (my last experiment with Clutter also hinted at potential performance problems), the new Unity shows a huge shift toward a Mac-style desktop. Though there are still lots of bugs and missing features, a few things can be said about this new Unity.

First, people complain about the break from the norms; some hate Unity because it disrupts their accustomed habits. The GNOME community (with its long development history) may feel betrayed. But from what I've learned after some years in graphics, UI design is a job of personality: it's the task of a small group that decides what is “nice and beautiful” and which way (other) users should follow, not the task of a committee (a “People's Committee”, I mean 😬). As a developer, I'm often in the self-conflicting state of wondering what is “nice and beautiful”, modifying some simple UI widgets over and over again. It's no surprise that UI always becomes a huge source of divergence (and problems) for community-driven projects. The key requirements for a UI system, in my opinion, are simplicity, consistency and elegance.

For consistency and simplicity, Unity is a big step forward, the reason given being that GNOME has become too complex and inconsistent (then what about the much more complex KDE?). Maximized windows have their caption and menu bars incorporated into the system bar on top, a feature clearly borrowed from the Mac, yet further varied and developed. A simple dock bar is positioned on the left, and the system menu button doesn't bring up a menu but a search panel with which you can launch programs, open files… with a few keystrokes. I like this a lot because it offers a form of GNOME Do: it's harder to launch rarely-used items (especially if you don't remember their names) but more convenient for frequently-used ones. After all, lexical memory is much faster than spatial memory, once one has been trained for it.

One more apparent physical factor is that vertical screen space has become scarcer than horizontal space now that 16:9 screens are quite popular. To conserve useful screen space, the UI must constrain the caption bar, the menu bars and especially the tool bars; there are even recommendations on eliminating the status bar and minimizing the scroll bar on Mark Shuttleworth's (Canonical's founder) blog. As for the scroll bar, that is quite surely a mimicked and modified version of the one in iOS. Actually, from my recent experiences with Mac and iOS, there are still a lot of lessons to be learned from these two OSes on how to use space efficiently: remove heavy window decorations and borders, use lighter UI fonts, smaller and more symbolic icons, design simpler widgets… and in some cases even sacrifice less-used UI features.

The trend of UI becoming simpler and more consistent is quite obvious. Efficient use of space is the key, but the spatial aspect is not all there is to user experience: the keyboard and mnemonics are also important parts of the learning path (UI effects can be fancy, but over time people come to love the simplest keystrokes that do the job). The third factor, elegance, is even more a topic of debate: people can largely agree on what simplicity and consistency are, but what counts as aesthetics remains mysterious! Unity claims that it will directly compete with the Mac on UI design and user experience, but in my opinion its aesthetics are still far behind the Mac's, e.g. Compiz's effects are numerous but not very fine-tuned compared to the smaller set of animations the Mac offers… And even the Mac still doesn't satisfy my eyes in quite a few cases…

vectorial

The classical SVG example rendered using a thin OpenVG layer on top of OpenGL (or Quartz) on a Mac. This is also to say goodbye to the old Lunar year (year of the tiger) that is ending!

Finished with my survey on vectorial graphics, in detail, on rendering SVG using Quartz and OpenGL (ES) on Mac, iOS and some Android platforms. I had the chance to systematize my knowledge of vector graphics: path, stroke, anti-aliasing, solid, gradient and pattern fills, etc. Today everybody talks about 3D, OpenGL, DirectX… while few mention much about 2D stuff, so I've traced back some of the historical evolution paths, since I believe it's through history that we understand technologies.

In the beginning, there was… PostScript

It was John Warnock who kindled the idea; he joined Xerox in 1978, and an early version of PostScript (named InterPress) became the language to drive a laser printer. The laser printer was then a revolutionary device, offering extraordinary graphics compared to the capabilities of dot-matrix printers. Warnock left and founded Adobe in 1982, the company that produced well-known graphics software including Illustrator, king of the vector editors.

Then there was DPS, PDF and Quartz

But it was Steve Jobs who realized the superiority of PostScript and urged John Warnock to popularize it. When Jobs left Apple and started NeXT, he co-developed with Adobe DPS (Display PostScript), a derivative of PostScript and the language that drove the NeXT computer's graphics system. When Jobs got back to Apple, DPS evolved into what is now known as PDF, and Quartz is the C binding that bridges traditional Unix programmers to the Mac graphics world.

The X window system

The X designers also started with a PostScript Red Book in hand, but due to various reasons, including the lack of deep consensus about vector graphics, X has kept only a low level of PostScript support to this day. The X server can only handle basic PostScript commands (it can't even draw splines). X took a hybrid approach, using both vectorial and raster-based solutions to the problem. The Unix roots also had an impact: X is the only true client/server windowing system to the current day.

Until now, the NeXT computer remains an idealistic symbol, and pure vectorial graphics remains a pursuit, perhaps for higher-standard devices, such as this Backbone:

Backbone is an attempt (our attempt) at creating a Really Good Desktop. The metric we use for “Really Good” is our own. In short, to us, to carry on the NeXTSTEP® and OPENSTEP® spirit!

The Windows’ GDI

Born the youngest of all these graphics systems, GDI learned nothing from its predecessors. It is neither device and resolution independent (like the Mac) nor a true client/server system (like X). GDI sticks to the screen and the pixel unit, with quite a lot of implementation flaws. These flaws don't become obvious until we come to serious editing, publishing and printing: text documents and graphics designs never have the on-screen display and printing quality we would expect, though various 3rd-party software packages rescue the situation somewhat.

Then, things change with time

The 2D graphics systems on Mac, Windows, Unix… all have different origins, and all target different real-world problem domains. All have hardware acceleration of varying levels and quality, and it's hard to compare them in some cases. To the present day, no system is known to have kept the original idealistic model of pure PostScript: X has been mixed from the beginning, Mac & iOS have switched to raster to some extent, and GDI is essentially pixel-based. Then comes the wind of change! It is another story, another evolution path, but today 3D hardware has become quite popular at reasonable prices. It's counter-intuitive to treat 2D as something separate from 3D, and the trend is to merge 2D into a subset of 3D rendering. However, the process hasn't been easy, and it will take some more time to reach maturity:

  • WPF (Windows Presentation Foundation): first came with Windows Vista; GDI now runs as part of the DirectX 3D rendering environment. Vista was not a success indeed!

  • QuartzGL: Quartz2D runs on top of OpenGL since OS X 10.2 (Jaguar). However, QuartzGL is not enabled by default even in the current version (10.6 – Snow Leopard) since it’s still quite buggy.

  • GLX & AIGLX: both have some implementation problems and are competing with each other to become the official 3D extension for X.

Take an arbitrary GDI API (such as MoveTo, LineTo…) and you can see that the parameters are integers, which in reality are pixel units. The Quartz counterparts are always floats, a virtual unit, so that the APIs can be device and resolution independent.
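The difference shows up right in the call sites; a rough side-by-side fragment (assuming an existing GDI device context hdc and an existing Quartz CGContextRef ctx, and using the Win32 MoveToEx/LineTo variants):

/* GDI: integer coordinates, i.e. device pixels */
MoveToEx(hdc, 10, 20, NULL);
LineTo(hdc, 200, 80);

/* Quartz: floating-point "user space" units, mapped to the device later */
CGContextMoveToPoint(ctx, 10.0f, 20.0f);
CGContextAddLineToPoint(ctx, 200.0f, 80.0f);
CGContextStrokePath(ctx);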

The so-called “GDI printer” is actually a bitmap device; it lacks a PostScript interpreter and hence needs to be attached to a computer to do the actual computing. The reason is obvious: cheapness; adding a PostScript interpreter would significantly raise the cost!

polynomial texture mapping

The “specimen”, an oil painting by Bửu Chỉ, the prestigious Vietnamese painter.

The lighting vector (u, v, z) in the (x, y, z) coordinate system shown in the image above (the “right-hand rule”): the origin is at the center of the picture, x points to the right, y points upward and z points toward the camera.

Result: the (medical) infrared light source turned out to be a very bad choice

If you're into computer graphics, you have probably learned about normal mapping, bump mapping… Last week a colleague told me about Polynomial Texture Mapping, and the experiment below is what I did as an exercise to learn about PTM. In essence, PTM can produce extraordinary effects because it makes use of a lot of lighting data collected under real-world conditions (check out some examples on HP's PTM page).

To make a PTM (using tools from HP Labs), you photograph the “specimen” under many light directions, then combine everything into one “texture” (using PTMfitter) in which pixel values are computed from the collected data through a polynomial function.
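For the curious, the per-pixel model (as I understand it from HP Labs' PTM papers) is a biquadratic polynomial in the projected light direction; evaluating it at view time is as cheap as:

/* Per-pixel luminance model of a PTM:
 *   L(u, v) = a0*u^2 + a1*v^2 + a2*u*v + a3*u + a4*v + a5
 * where (u, v) are the first two components of the normalized light vector
 * and a0..a5 are the six coefficients fitted for each pixel. */
static float ptm_luminance(const float a[6], float u, float v)
{
    return a[0]*u*u + a[1]*v*v + a[2]*u*v + a[3]*u + a[4]*v + a[5];
}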

The input to PTMfitter is a file listing the images and their lighting vectors (u, v, z). Note that (u, v, z) must be a normalized vector, although PTMfitter only uses (u, v) at the moment; a quick way to produce such normalized rows is sketched after the listing below. (Getting PTMfitter to run under Linux is quite tricky since it's linked against a very outdated version of libstdc++.)

12
~/ptm/ptm_11a.jpg -0.894427191 0 0.447213595
~/ptm/ptm_12a.jpg -0.707106781 0 0.707106781
~/ptm/ptm_13a.jpg -0.447213595 0 0.894427191
~/ptm/ptm_14a.jpg 0.447213595 0 0.894427191
………………………………….
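Those values are just unit direction vectors; for example, a light placed two units to the left of the camera axis and one unit toward the camera gives the first row above. A throwaway helper to print such lines (the image path is only the example from above):

#include <math.h>
#include <stdio.h>

/* Print one line of the PTMfitter input file from a raw light position. */
static void print_light(const char *image, double u, double v, double z)
{
    double len = sqrt(u*u + v*v + z*z);   /* normalize to unit length */
    printf("%s %.9f %.9f %.9f\n", image, u/len, v/len, z/len);
}

/* print_light("~/ptm/ptm_11a.jpg", -2, 0, 1);  ->  -0.894427191 0 0.447213595 */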

The output is a .ptm file that can be viewed with PTMviewer. The effect is really impressive, much more realistic than the normal or bump mapping we usually see. To verify the difference, here are the .ptm and .pl files; you will need to download PTMviewer from HP Labs (there are versions for Windows, Linux and Mac).

We can also use PTM to create a DOF (depth of field) effect. The vector (u, v, z) in that case is no longer a lighting direction but the focus point the camera is shooting at. The technique is quite useful since DOF is very expensive to compute in graphics applications, and this is going to be my next experiment!

whatever happened to programming?

There’s the change that I’m really worried about: that the way a lot of programming goes today isn’t any fun because it’s just plugging in magic incantations – combine somebody else’s software and start it up. It doesn’t have much creativity. I’m worried that it’s becoming too boring because you don’t have a chance to do anything much new… The problem is that coding isn’t fun if all you can do is call things out of a library, if you can’t write the library yourself. If the job of coding is just to be finding the right combination of parameters, that does fairly obvious things, then who’d want to go into that as a career? (Coders at Work – Peter Siebel, quoting Donald Knuth)

An example of the simplest of features: image resizing in browsers. Top image: Firefox, bottom image: Chrome (both are screenshots displayed at original size from the two browsers, both using the same large source image). You can see that the image resizing of Firefox and IE is far worse than that of the other browsers.

Whatever happened to programming? In my first year at university (1998) I wrote a simple 3D engine in Borland C; books were scarce back then and there was no internet yet. The goal was very simple: display (and rotate) a 3D object read from a file describing its vertex coordinates. Using simple geometric intuition, I built a projection: the intersection of the line from the eye to each vertex of the object with the projection plane is the pixel displayed on the screen. A while later, after reading more material, I learned that this is called perspective projection. Some code released from the game Doom later helped me better understand the projection technique used in real games: ray casting.
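That naive construction boils down to a couple of divisions per vertex; a minimal sketch (the focal distance and screen center below are made-up numbers):

/* Project a 3D point onto the screen: the intersection of the eye-to-vertex
 * line with a projection plane at distance 'focal' in front of the eye. */
static void project(double x, double y, double z, double focal,
                    int cx, int cy, int *sx, int *sy)
{
    *sx = cx + (int)(focal * x / z);   /* screen x grows to the right */
    *sy = cy - (int)(focal * y / z);   /* screen y grows downward     */
}

/* e.g. project(1.0, 2.0, 5.0, 256.0, 320, 240, &sx, &sy); */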

Some time later, when the internet arrived, we eagerly studied more “cutting-edge” graphics techniques. I remember the winasm.net website, which explained a lot about Gouraud and Phong shading, lighting, bump mapping, sprites… We read code written mostly in Assembly, then rewrote it in C/C++ inside our own “engine”. That's how it was: we wrote whatever we could think up, in most cases “reinventing the wheel”, starting from the naive models and algorithms we built in our own heads. Compared with today's professional engines my “engine” is embarrassing, but at least we understood the basic ideas and operating principles.

Everything mentioned above is just the ABC of computer graphics, a field with a great many fascinating and unique algorithms, techniques and tricks. Because I was so keen on writing graphics and image processing programs, I neglected some other courses. In an AI course, I once copied a friend's code to hand in as homework (with his permission). It was just a gomoku (croix-zéro) playing program using the A* algorithm, but since I had skipped classes I knew nothing about A*; I sat up all night reading that simple program to see what it did, yet still couldn't understand it. Later, after “catching up on the basics”, I found that A* is nothing complicated, but if you don't have the idea of the algorithm in your head… the code is just a jungle that tells you nothing about itself.

Later I wrote a lot of image processing code, and there a mathematical foundation became important: most of the concepts are complex enough that you cannot picture them by intuition alone. But after some careful study we came to understand the ideas behind convolution, high-pass and low-pass filters… Thanks to that knowledge I can now make sounder judgments. For example, a feature that seems utterly simple, image resizing, is to this day still not implemented correctly in most popular image processing programs: GIMP, Photoshop… You may not believe this, but GIMP, Photoshop, Firefox… resize images with linear algorithms that can throw away important information while keeping unimportant pixels.

Some software engineers have told me that in all their years of work they have never had to implement a single algorithm learned at school; programming today is just assembling modules and libraries at a high level, rarely touching the real low-level nature underneath. I agree: some projects simply work at a high level, and not every product demands creative skills or engineers with deep understanding of specialized algorithms. But that does not mean such skills are unnecessary or unimportant. Without them we will forever be stuck doing mediocre work, and even mediocre work demands a certain know-how and ability to organize code. In the programming profession I still favour the kind of person who likes to “reinvent the wheel”.

Which goes to show that the simpler something sounds, the less simple it really is! Many young engineers I have met recently pay no attention to such simple, basic things. Of course none of the above is anything grand, just as most programming problems are nothing grand either. But first show me that you have a basic understanding of the problem you are working on: don't call two nested for loops an “algorithm”; be able to read the description of an algorithm and turn it into working code… before declaring that these are basics not worth caring about. Everything starts with your own thinking, with the model and sequence of steps in your own head, not with a list of concepts, buzzwords and APIs whose substance you don't even understand! Only then can we talk about ideas, strengths and weaknesses, where technologies can (and cannot) be used, and how they differ!

A very important way to understand technology is not through algorithms and ideas, but through… history. If you care to read the history of computer graphics, you will learn that behind “Phong shading” is Bùi Tường Phong, a Vietnamese. You will understand why SGI (Silicon Graphics Inc.) was so influential, why that company's styles and products set the gold standards of the computer graphics community (OpenGL, for example), why Macs use UEFI rather than BIOS, why NeXTSTEP and OPENSTEP used a “vector display” rather than the “bitmap display” of most other windowing systems… The development of technology is an evolutionary process tied to social and human factors, not simply an ever greater complication of technical ideas!

chinese rendering server

In my previous post, we saw the image replacement technique applied to rendering mathematical formulas. Replacing text with images shows up in various Web techniques, mainly to display things that the browser can't! It is quite possible that many Web technologies will never converge into common “form factors”: how many years have passed and SVG is still not supported on all browsers, and font technologies are still fighting stiffly with each other? Various issues will always remain unresolved, and image replacement, though ugly and inconvenient, can serve as a temporary solution.

As you can see in the image above, the first line is a popular Chinese straight-stroke font that can be seen on most browsers; the next lines are nice calligraphy (brush-stroke) fonts that can hardly be seen on the web! I'm going to try using FreeType2 for a very specific problem, rendering Chinese fonts, and the reason is simple: aesthetics! Searching around, I couldn't find any simple, standalone solution. Nice Chinese fonts are very big, a typical ttf file running from 5MB to 50MB depending on the character set and quality (at that size, we obviously need a server-side solution). Packages like Pango or Cairo are too complex and would require additional dependencies (unavailable on a free Linux host).

It took me a whole day struggling with FreeType2's reference and manual to get it working with Chinese fonts (quite different from conventional Latin fonts indeed), and finally here it is! You can access the executable at http://tkxuyen.com/freetype2.php with the following syntax: freetype2.php?text=…&font=…&size=…&color=…, here is an example. Below are renderings at different sizes (anti-aliasing works really well):

text=洛阳城东桃李花飞来飞去落谁家&font=2&size=11&color=111111
text=洛阳城东桃李花飞来飞去落谁家&font=2&size=12&color=111111
text=洛阳城东桃李花飞来飞去落谁家&font=2&size=13&color=111111
text=洛阳城东桃李花飞来飞去落谁家&font=2&size=14&color=111111
text=洛阳城东桃李花飞来飞去落谁家&font=2&size=15&color=111111
text=洛阳城东桃李花飞来飞去落谁家&font=2&size=16&color=111111
text=洛阳城东桃李花飞来飞去落谁家&font=2&size=18&color=111111
text=洛阳城东桃李花飞来飞去落谁家&font=2&size=18&color=111111
text=洛阳城东桃李花飞来飞去落谁家&font=2&size=19&color=111111

and renderings with 3 different Chinese fonts (very big files, installed on the server) and in different colors. Just note that these fonts are a bit non-standard: they produce traditional Chinese characters as output, but only accept simplified Chinese as input:

text=洛阳城东桃李花飞来飞去落谁家&font=2&size=19&color=FF1111
text=洛阳城东桃李花飞来飞去落谁家&font=1&size=19&color=11FF11
text=洛阳城东桃李花飞来飞去落谁家&font=0&size=19&color=1111FF

Update, Jun 6th, 2021:

Due to some changes on my web hosting, CGI is disabled for some reason. I really don't have time to figure out why, so I have temporarily removed the Chinese font rendering for now!

FreeType2 is a very handy open source library; it's available on many platforms: Unix, DOS, Windows, Mac, Amiga, BeOS, Symbian… and it does a very good job of handling typefaces! Since FreeType2's patent issues expired in May 2010, we should see it applied in more and more areas.

This is my very simple C code (~250 LOC) to experiment with FreeType2: loading fonts, loading glyphs, rendering bitmaps, dealing with Unicode… To compile, just something like: gcc gifsave.c freetype2.c -o freetype2.cgi `pkg-config --cflags --libs freetype2`. I hope to find time to extend the code into a more usable form: multi-line layout, alignment, RTF support, etc. Some restrictions are imposed to protect the server; if some text can't be rendered (e.g. the rendering dimensions are too large), an error image like this is displayed instead.
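The heart of those ~250 lines is just the usual FreeType2 sequence; a trimmed-down sketch (error handling removed, and the font path below is only a placeholder, not the actual file on the server):

#include <ft2build.h>
#include FT_FREETYPE_H

/* Render one Unicode code point into an 8-bit grayscale coverage bitmap. */
int render_glyph(unsigned long codepoint, int pixel_size)
{
    FT_Library lib;
    FT_Face    face;

    FT_Init_FreeType(&lib);
    FT_New_Face(lib, "fonts/chinese_brush.ttf", 0, &face);   /* big CJK font on the server */
    FT_Select_Charmap(face, FT_ENCODING_UNICODE);
    FT_Set_Pixel_Sizes(face, 0, pixel_size);

    /* FT_LOAD_RENDER gives an anti-aliased bitmap in face->glyph->bitmap */
    FT_Load_Char(face, codepoint, FT_LOAD_RENDER);
    FT_Bitmap bmp = face->glyph->bitmap;
    int advance = face->glyph->advance.x >> 6;   /* 26.6 fixed point -> pixels */

    /* ... copy bmp.buffer (bmp.rows x bmp.width, stride bmp.pitch)
       into the output image at the current pen position ... */

    FT_Done_Face(face);
    FT_Done_FreeType(lib);
    return advance;
}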

latex rendering server


Some example expressions rendered by MimeTeX (it's good to appear to be smarter than you are 😬!). If an expression fails to render, you will see an error image like this:

Recently, the wonderful yourequations.com site (which I had been using to occasionally render mathematical expressions on web pages) ceased its service due to heavy traffic. I was thinking about running my own LaTeX rendering server, and things turned out to be pretty easy, as follows, thanks to the excellent MimeTeX package, a reduced LaTeX subset. It's also interesting to experiment with the “stone age technique” of CGI. First, download and compile the package:

wget http://www.forkosh.com/mimetex.zip
unzip mimetex.zip
cd mimetex
cc -DAA mimetex.c gifsave.c -lm -o latex.cgi
# test the binary, view the ‘fermat.gif’ image
./latex.cgi -i "a^2+b^2=c^2" -e fermat.gif

Uploaded to the host, latex.cgi runs without any dependencies. The ugly thing about my (free) Linux host is that although it does allow CGI, it won't let the CGI return documents of type 'image/gif' no matter what. To work around this, I wrote a small PHP script which parses the GET input, calls the CGI to generate and save the image in a cache directory, then redirects the request to the LaTeX image. This also avoids exposing your CGI directly on the web!

// use 'system' to execute the CGI binary
$cmd = "$mimetex_path -e ".$full_filename;    // -e <file>: write the GIF into the cache
$cmd = $cmd." ".escapeshellarg($formula);     // the LaTeX expression, shell-escaped
system($cmd, $status_code);
$text = $pictures_path."/".$filename;         // relative URL of the cached image
return $text;

I have always loved CGI for its simplicity. CGI, Perl, Python… old things never die! Although I have almost no experience with them, they let you do whatever you want, given a little tweaking know-how. Please note that MimeTeX is not as full-featured as LaTeX: it can't render overly complex expressions and it uses an ugly bitmap font. If your server has some LaTeX support, consider using MathTeX, a more advanced version from the same author.

Update, Jun 6th, 2021

Due to some changes on my web hosting, CGI is disabled for some reason. I really don't have time to figure out why, so I have temporarily removed the LaTeX rendering for now!