For Tile-Based Rendering: this is still not the same thing. Just splitting a scene into tiles to render it on multiple GPUs doesn't make the GPU a true Tile-based Renderer. For that, the pipeline has to be significantly modified (or rather: completely redesigned), because a TBR renders top-down (first bin all of the scene's geometry into screen tiles, then render tile by tile) instead of bottom-up (process each primitive immediately as it is submitted). That per-tile pass is also what makes Hidden Surface Removal so much more efficient on a Tile-based Renderer: visibility for a whole tile can be resolved in fast on-chip memory before a single pixel gets shaded.
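Here's a minimal software sketch of that two-pass, bin-then-shade structure (all names are made up for illustration, this is not any vendor's actual pipeline; "triangles" are simplified to axis-aligned quads with constant depth to keep it short):

Code:
// Minimal sketch of tile-based deferred rendering (TBDR).
// Pass 1 bins geometry into tiles; pass 2 resolves visibility per tile
// in "on-chip" memory BEFORE shading -- that is what makes HSR cheap.
#include <cstdio>
#include <vector>
#include <array>
#include <limits>

constexpr int SCREEN_W = 64, SCREEN_H = 64, TILE = 16;
constexpr int TILES_X = SCREEN_W / TILE, TILES_Y = SCREEN_H / TILE;

struct Tri { int x0, y0, x1, y1; float depth; int color; }; // quad stand-in

// Pass 1 ("top-down"): bin every primitive into the tiles it touches.
std::vector<std::vector<Tri>> bin_triangles(const std::vector<Tri>& scene) {
    std::vector<std::vector<Tri>> bins(TILES_X * TILES_Y);
    for (const Tri& t : scene)
        for (int ty = t.y0 / TILE; ty <= (t.y1 - 1) / TILE; ++ty)
            for (int tx = t.x0 / TILE; tx <= (t.x1 - 1) / TILE; ++tx)
                bins[ty * TILES_X + tx].push_back(t);
    return bins;
}

// Pass 2: per tile, find the visible surface for every pixel first,
// then shade each pixel exactly once (the "deferred" part of TBDR).
void render_tile(int tx, int ty, const std::vector<Tri>& bin, int* fb) {
    std::array<float, TILE * TILE> depth;  depth.fill(std::numeric_limits<float>::max());
    std::array<int,   TILE * TILE> winner; winner.fill(-1); // visible tri per pixel
    for (size_t i = 0; i < bin.size(); ++i)                 // visibility only, no shading yet
        for (int y = 0; y < TILE; ++y)
            for (int x = 0; x < TILE; ++x) {
                int sx = tx * TILE + x, sy = ty * TILE + y;
                if (sx >= bin[i].x0 && sx < bin[i].x1 &&
                    sy >= bin[i].y0 && sy < bin[i].y1 &&
                    bin[i].depth < depth[y * TILE + x]) {
                    depth[y * TILE + x] = bin[i].depth;
                    winner[y * TILE + x] = (int)i;
                }
            }
    for (int y = 0; y < TILE; ++y)                          // shade visible surfaces only
        for (int x = 0; x < TILE; ++x)
            if (winner[y * TILE + x] >= 0)
                fb[(ty * TILE + y) * SCREEN_W + tx * TILE + x] =
                    bin[winner[y * TILE + x]].color;
}

int main() {
    std::vector<Tri> scene = { {0, 0, 64, 64, 0.9f, 1},     // far background quad
                               {8, 8, 40, 40, 0.2f, 2} };   // near occluder
    std::vector<int> fb(SCREEN_W * SCREEN_H, 0);
    auto bins = bin_triangles(scene);
    for (int ty = 0; ty < TILES_Y; ++ty)
        for (int tx = 0; tx < TILES_X; ++tx)
            render_tile(tx, ty, bins[ty * TILES_X + tx], fb.data());
    printf("pixel (16,16) shows color %d (occluder won, background never shaded there)\n",
           fb[16 * SCREEN_W + 16]);
    return 0;
}

A brute-force immediate-mode renderer would instead have shaded the background pixel first and then overwritten it, wasting the shading work that the TBR never performed.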
However, true TBR also CAN have something to do with multi-chip rendering: PowerVR stated years back that their TBR would scale well across multiple chips (each one rendering a set of tiles). They actually even built such a thing for arcade machines. [See here].
Quote: The PowerVR architecture allows content developers to create a single game for a variety of system platforms. Family members include PCX1, an integrated single-chip solution for personal computers, and a multi-chip solution for video arcade machines.
Both nVidia and ATi (and S3, Matrox, SiS, ...) refused to completely redesign their architectures for true TBR; instead they kept optimizing their existing brute-force renderer designs to be more and more efficient.
Note that TBR is also known as "Deferred Rendering", so that term might help when googling it (nVidia/ATi use "Immediate Rendering"). Don't confuse that with Deferred Shading though!
As for the original question: I guess when it comes to advanced shaders that need data from portions of the screen that are not being rendered on the current GPU (e.g. a pixel shader running on VSA 1 needs the color of a pixel that has so far only been drawn on VSA 2), they might have just done AFR. I'm not sure about the implementation problems or the bandwidth requirements, but handling such shader dependencies over the SLI link seems rather complicated and cumbersome to me, whereas it's simply not needed in AFR: the whole framebuffer exists in both GPUs' local memory, not just half of it as with Scanline Interleaving.
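To make that concrete, here's a tiny sketch of the AFR idea (the Gpu type and render_frame are hypothetical, not any real driver API; the point is just the frame-level split):

Code:
// Why AFR sidesteps the cross-GPU dependency problem: each GPU owns a
// COMPLETE framebuffer for its frame, so a shader reading back any screen
// pixel finds it in local memory. With scanline or tile splitting, half
// the pixels would live on the other chip and need the SLI link.
#include <cstdio>
#include <vector>

struct Gpu {
    std::vector<int> local_framebuffer;           // whole frame, not half
    Gpu() : local_framebuffer(640 * 480, 0) {}
    void render_frame(int frame) {
        // ... whole scene rendered here; a shader sampling any pixel of
        // this frame always hits local_framebuffer, never the link.
        local_framebuffer[0] = frame;
    }
};

int main() {
    Gpu gpus[2];
    for (int frame = 0; frame < 6; ++frame) {
        Gpu& g = gpus[frame % 2];                 // even/odd frames alternate chips
        g.render_frame(frame);
        printf("frame %d -> GPU %d (owns all %zu pixels locally)\n",
               frame, frame % 2, g.local_framebuffer.size());
    }
    return 0;
}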