It appears that Amazon has leaked the price of the PS3 by setting up a preview for it on their site. They are not yet accepting pre-orders, but the price IS set at $299.99!!! View the Page Here.
Video Game Technician in Jamaica
Ultra Aluminus Black Case/ASUS Rampage Formula 2 /Intel Core 2 Quad Q6600 oc 3.6/ZALMAN CNPS9700/8GB G-Skill ripjaw DDR3 1600/EVGA GeForce GTX super oc 470 edition /SAMSUNG Black 32" Widescreen hdtv/etc/
nah tek that to mean anything, since the ps3 will be more powerful than the x360 and everybody knows it will sell like hotcakes at 500+
~ >>>| Windows 7 - Tips, Tricks, Info |<<< ~ >>>| Nvidia GTX 275 vs AMD HD 4890 |<<< ~ >>>| This Week's Releases |<<< ~
~ >>>| Forum posting & you (video) |<<< ~ >>>| How To Behave On An Internet Forum (video) |<<< ~
Originally Posted by **ScarFace**
Wow, cheaper than the Xbox.. if that is so mi getting one of them.
That is still unbelievable.
Vision without Mission is Daydreaming!
Originally Posted by BlaqMale
Actually, you need to get some facts before you make those assumptions. I am currently doing hardware reviews on the two systems myself, and so far the Xbox 360 is far surpassing the PS3.
Here is a report that compares both systems and shows, for all the major components, which one performs better at what.
http://xbox360.ign.com/articles/617/617951p1.html
Free Thinkers are those who are willing to use their minds without fearing to understand things that clash with their own customs, beliefs for privileges. This state of mind is not common, but it is essential for right thinking; where it is absent, discussion is apt to become worse than useless.
i remember reading differently, but i'll wait till the final ps3 specs before i retract my statement. point is though that for the first 6 months at least, sony could probably sell all their stock at 500+ on brand loyalty alone. but hey, at 299 there is nothing that would stop me from copping one.
Originally Posted by Nastrodamus
LOL LOL LOL!! Nastro, that's an M$-paid-for study and I think it's a bit dated. I would disregard such studies.
Here's some info on the Xenon CPU (Xbox 360) and the Cell CPU (PS3):
http://arstechnica.com/articles/paedia/cpu/cell-1.ars - Introducing the IBM/Sony/Toshiba Cell Processor — Part I: the SIMD processing units
http://arstechnica.com/articles/paedia/cpu/cell-2.ars - Introducing the IBM/Sony/Toshiba Cell Processor -- Part II: The Cell Architecture

The Cell's SIMD processing unit
As you can see, IBM has eliminated the instruction window and its attendant control logic, in favor of adding more storage space and more execution hardware. A Cell SPE doesn't do register renaming or instruction reordering, so it needs neither a rename register file nor a reorder buffer. The actual architecture of the Cell SPE is a dual-issue, statically scheduled SIMD processor with a large local storage (LS) area. In this respect, the individual SPUs are like very simple, PowerPC 601-era processors.
The main differences between an individual SPE and an early RISC machine are twofold. First, and most obvious, is the fact that the Cell SPE is geared for single-precision SIMD computation. Most of its arithmetic instructions operate on 128-bit vectors of four 32-bit elements. So the execution core is packed with vector ALUs, instead of the traditional fixed-point ALUs. The second difference, and this is perhaps the most important, is that the L1 cache has been replaced by 256K of locally addressable memory. The SPE's ISA, which is not VMX/Altivec-derivative (more on this below), includes instructions for using the DMA controller to move data between main memory and local storage. The end result is that each SPE is like a very small vector computer, with its own "CPU" and RAM.
This RAM functions in the role of the L1 cache, but the fact that it is under the explicit control of the programmer means that it can be simpler than an L1 cache. The burden of managing the cache has been moved into software, with the result that the cache design has been greatly simplified. There is no tag RAM to search on each access, no prefetch, and none of the other overhead that accompanies a normal L1 cache. The SPEs also move the burden of branch prediction and code scheduling into software, much like a VLIW design.
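The software-managed local store model described above can be sketched in plain Python: the program explicitly stages a tile of "main memory" data into a small, fixed-size local buffer, computes on it, and writes the results back, instead of relying on a hardware cache to do that transparently. This is an illustrative sketch of the programming model only; real SPE code would use DMA commands and 128-bit vector instructions, and the names below are invented for the example.

```python
# Illustrative model of an SPE-style software-managed local store.
# "Main memory" is a big list; the local store is a small fixed-size
# buffer that the programmer fills and drains explicitly (the stand-in
# for DMA transfers on real Cell hardware).

LOCAL_STORE_BYTES = 256 * 1024
ELEM_BYTES = 4                                 # 32-bit single-precision elements
TILE = LOCAL_STORE_BYTES // ELEM_BYTES // 2    # half for input, half for output

def scale_array(main_memory, factor):
    """Multiply every element by `factor`, one local-store tile at a time."""
    out = [0.0] * len(main_memory)
    for base in range(0, len(main_memory), TILE):
        local_in = main_memory[base:base + TILE]    # "DMA in" to local store
        local_out = [x * factor for x in local_in]  # compute on local data only
        out[base:base + TILE] = local_out           # "DMA out" back to memory
    return out

data = [float(i) for i in range(100_000)]
result = scale_array(data, 2.0)
```

The point of the pattern is that the compute loop only ever touches the small local buffer; the programmer, not a tag-matching cache, decides what lives there and when it moves.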
The SPE's very simple front end can take in two instructions at a time, check to see if they can operate in parallel, and then issue them either in parallel or in program order. These two instructions then travel down one of two pipes, "even" or "odd," to be executed. After execution, they're put back in sequence (if necessary) by the very simple commit unit and their results are written back to local memory. The individual SPUs can throw a lot overboard, because they rely on a regular, general-purpose PowerPC processor core to do all the normal kinds of computation that it takes to run regular code. The Cell system features eight of these SPUs all hanging off a central bus, with one 64-bit PowerPC core handling all of the regular computational chores. Thus all of the Cell's "smarts" reside on the PPC core, while the SPUs just do the work that's assigned to them.
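The dual-issue front end can be illustrated with a toy scheduler: fetch two instructions at a time, and issue them together only when one belongs to the "even" pipe and the other to the "odd" pipe, otherwise issue one per cycle in program order. The opcode-to-pipe assignments here are invented stand-ins, not the real SPE instruction set.

```python
# Toy sketch of an SPE-style dual-issue front end: two pipes, "even"
# (arithmetic-type ops) and "odd" (loads/stores/branches). A fetched
# pair dual-issues only if the two instructions go to different pipes.

EVEN = {"fadd", "fmul", "and", "or"}        # hypothetical even-pipe ops
ODD = {"load", "store", "branch", "shuffle"}  # hypothetical odd-pipe ops

def issue_cycles(program):
    """Return the number of issue cycles for a list of opcodes."""
    cycles, i = 0, 0
    while i < len(program):
        pair = program[i:i + 2]
        if len(pair) == 2 and (
            (pair[0] in EVEN and pair[1] in ODD) or
            (pair[0] in ODD and pair[1] in EVEN)):
            i += 2   # dual-issue: both instructions go in one cycle
        else:
            i += 1   # single issue, preserving program order
        cycles += 1
    return cycles

# A mix of arithmetic and memory ops pairs up well:
print(issue_cycles(["fadd", "load", "fmul", "store"]))  # 2 cycles
# Back-to-back arithmetic cannot pair and serializes:
print(issue_cycles(["fadd", "fmul", "fadd", "fmul"]))   # 4 cycles
```

Note there is no reordering anywhere: if a pair cannot dual-issue, the hardware simply takes two cycles, which is exactly the burden the text says gets pushed onto the compiler and programmer.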
To sum up, IBM has sort of reapplied the RISC approach of throwing control logic overboard in exchange for a wider execution core and a larger storage area that's situated closer to the execution core. The difference is that instead of the compiler taking up the slack (as in RISC), a combination of the compiler, the programmer, some very smart scheduling software, and a general-purpose CPU do the kind of scheduling and resource allocation work that the control logic used to do.
http://arstechnica.com/articles/paed.../xbox360-1.ars - Inside the Xbox 360, part I: procedural synthesis and dynamic worlds

The Cell's basic architecture
The basic architecture of the Cell is described by IBM as a "system on a chip" (SoC) design. This is a perfectly good characterization, but I'd take it even further and call Cell a "network on a chip." As I described yesterday, the Cell's eight SPUs are essentially full-blown vector "computers," insofar as they are fairly simple CPUs with their own local storage.
These small vector computers are connected to each other and to the 512KB L2 cache via an element interconnect bus (EIB) that consists of four sixteen-byte data rings with 64-bit tags. This bus can transfer 96 bytes/cycle, and can handle over 100 outstanding requests.
The individual SPEs can use this bus to communicate with each other, and this includes the transfer of data in between SPEs acting as peers on the network. The SPEs also communicate with the L2 cache, with main memory (via the MIC), and with the rest of the system (via the BIC). The onboard memory interface controller (MIC) supports the new Rambus XDR memory standard, and the BIC (which I think stands for "bus interface controller" but I'm not 100% sure) has a coherent interface for SMP and a non-coherent interface for I/O.
http://arstechnica.com/articles/paed.../xbox360-2.ars - Inside the Xbox 360, Part II: the Xenon CPU

Dynamic worlds
Since at least 2003, Microsoft has been talking up the idea of "procedural synthesis" in games. In addition to the information on the technique provided in interviews, Microsoft has filed a very detailed patent that outlines how this will work on the Xbox 360. Since the unveiling of the new console and the confirmation of many details that had previously been only unconfirmed rumor, it's now possible to read this patent with the actual Xbox 360 implementation details in mind in order to learn exactly what Microsoft's ideas for procedural synthesis are and how those ideas function in their next-generation console.
In a nutshell, procedural synthesis is about making optimal use of system bandwidth and main memory by dynamically generating lower-level geometry data from statically stored higher-level scene data.
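A toy example makes the "higher-level data expands to lower-level geometry" idea concrete (the shape and numbers here are invented for illustration, not from the patent): store only a circle's center, radius, and segment count, and synthesize the full vertex list on demand instead of keeping it in main memory.

```python
import math

# Toy procedural synthesis: a compact high-level description (center,
# radius, segment count) is expanded into low-level vertex data on the
# fly, instead of every vertex being stored statically in main memory.

def synthesize_circle(cx, cy, radius, segments):
    """Generate the (x, y) vertices of a circle from a few parameters."""
    verts = []
    for i in range(segments):
        a = 2.0 * math.pi * i / segments
        verts.append((cx + radius * math.cos(a), cy + radius * math.sin(a)))
    return verts

# Three floats plus an int expand into 256 vertex pairs:
mesh = synthesize_circle(0.0, 0.0, 5.0, 256)
print(len(mesh))  # 256
```

The bandwidth win is the ratio between the few parameters stored and the vertex data generated; the GPU-bound geometry never has to cross the memory bus in full.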
The PPE's pipeline and performance
In its overall design approach the PPE shares a surprising number of similarities with the Netburst architecture that powers Intel's P4. In particular, I once described the P4's overall design philosophy as "narrow and deep," as opposed to the "wide and shallow" approach of the G4 and its kin. The PPE shares the P4's "narrow and deep" approach to performance, but with a few very important differences.
The PPE's pipeline is 21 stages deep, the same number of pipeline stages as the Pentium 4 Northwood core. This 21-stage pipeline looks like a relic of an age prior to the recent multicore revolution, when clockspeed was still king. Still, it makes a certain amount of sense given the fact that the core was obviously designed to pack a lot of streaming media performance into a small on-die footprint. How can deep pipelines pack a lot of performance into a small die size? Let me explain.
Deep pipelines are well suited to high-bandwidth streaming media processors, where the code stream is fairly compact and serial at the individual thread level and the datasets are large, uniform, and data-prefetch-friendly. So the PPE's general approach to performance is to execute serial instructions twice as fast instead of parallel instructions two at a time. This rapid-fire, serial execution approach means that each PPE takes up less space than it would if it were designed with ILP (and execution core width) in mind, but it also means that code stream parallelism must come from running multiple threads at once (as described in the previous section).
Deep pipelining also allows a computer architect to increase the number of instructions that the processor can hold and simultaneously execute by stacking more instructions into the same amount of hardware. So a machine with deeper pipelines may have fewer execution units, but it can have more instructions in various stages of execution simultaneously. Thus the more deeply pipelined processor could theoretically be smaller than a comparable "wide and shallow" design while holding a greater number of instructions, because it does more with less. I say "theoretically," because more deeply pipelined processors usually aren't any smaller in real life than their less deeply pipelined counterparts.
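The depth-times-width arithmetic behind that claim is easy to sketch. The 21-stage, dual-issue figures come from the text above; the "wide and shallow" numbers below are invented purely for comparison, not taken from any real G4 datasheet.

```python
# Rough in-flight instruction capacity: pipeline depth x issue width.
# The 21-stage, 2-issue figures are from the text; the "wide and
# shallow" counterpart's numbers are hypothetical, for comparison only.

def in_flight(depth, width):
    """Max instructions simultaneously in some stage of execution."""
    return depth * width

narrow_and_deep = in_flight(21, 2)   # PPE-style core
wide_and_shallow = in_flight(7, 4)   # hypothetical shallow 4-issue core

print(narrow_and_deep, wide_and_shallow)  # 42 28
```

So the narrower machine can hold more instructions in flight with fewer execution units, which is the "does more with less" point, subject to the caveat in the text that real deeply pipelined chips rarely end up smaller.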
This brings me back to a statement I made at the beginning of this section, where I said that the PPE's deep pipeline was designed to pack a lot of streaming media performance into a small amount of die space. At that point, you were probably thinking that deeply pipelined processors like the Pentium 4 tend to have large die sizes, and if you were thinking so then you were right. However, a deep pipeline doesn't have to spell a large die size, if you throw out the instruction window along with most of the hardware that's intended to help the deep pipeline work well with branchy code.
too much reading, what's the conclusion maf?
Originally Posted by leoandru
Actually not really....
As a techie you should be aware of the most important components when making a system (esp. for gaming).
Processor, memory, video card, and front-side bus; everything else is a matter of nice-to-have, not need-to-have.
Compare those crucial components between the X360 and PS3. * Note that the X360 uses a PPC-type processor.
Originally Posted by Nastrodamus
yeah and the PS3 has one PPC processor controlling a nine core processor.
Well, to me it's not so much about technical comparison, cause why tell me the max bandwidth of the memory if I ain't gonna need all of it. It simply boils down to which system gives me the better gaming experience.