
Thread: Ps3 Price Leaked!!!!

  1. #1
    Join Date
    Jan 2005
    Posts
    875
    Rep Power
    0

    Exclamation Ps3 Price Leaked!!!!

    It appears that Amazon has leaked the price of the PS3 by setting up a preview for it on their site. They are not yet accepting pre-orders, but the price IS set at $299.99!!! View the Page Here.
    Video Game Technician in Jamaica 3968609
    Ultra Aluminus Black Case/ASUS Rampage Formula 2 /Intel Core 2 Quad Q6600 oc 3.6/ZALMAN CNPS9700/8GB G-Skill ripjaw DDR3 1600/EVGA GeForce GTX super oc 470 edition /SAMSUNG Black 32" Widescreen hdtv/etc/

  2. #2
    Join Date
    Dec 2004
    Posts
    4,316
    Rep Power
    0

    Default

    nah tek that to mean nothing since the ps3 will be more powerful than the x360 and everybody knows it will sell like hotcakes at 500+

  3. #3
    Join Date
    Jan 2003
    Posts
    1,137
    Rep Power
    0

    Default

    Quote Originally Posted by **ScarFace**
    It appears that Amazon has leaked the price of the PS3 by setting up a preview for it on their site. They are not yet accepting pre-orders, but the price IS set at $299.99!!! View the Page Here.
    Wow, cheaper than the Xbox... if that is so mi getting one of them.

    That is still unbelievable.
    Vision without Mission is Daydreaming!

  4. #4
    Join Date
    Feb 2003
    Posts
    4,163
    Rep Power
    0

    Default

    Quote Originally Posted by BlaqMale
    nah tek that to mean nothing since the ps3 will be more powerful than the x360 and everbody knows it will sell like hotcakes at 500+
    Actually you need to get some facts before you make those assumptions. I am currently doing hardware reviews on both consoles myself, and so far the Xbox 360 is far surpassing the PS3.

    Here is a report that compares both consoles' hardware and shows, for each of the major components, which one performs better at what.

    http://xbox360.ign.com/articles/617/617951p1.html
    Free Thinkers are those who are willing to use their minds without fearing to understand things that clash with their own customs, beliefs, or privileges. This state of mind is not common, but it is essential for right thinking; where it is absent, discussion is apt to become worse than useless.

  5. #5
    Join Date
    Dec 2004
    Posts
    4,316
    Rep Power
    0

    Default

    i remember reading differently but i'll wait till the final ps3 specs b4 i retract my statement. point is though that for the first 6 months at least sony could probably sell all their stock at 500+ based on brand loyalty alone, but hey, at 299 there is nothing that would stop me from copping one.

  6. #6
    Join Date
    Oct 2004
    Posts
    4,814
    Rep Power
    24

    Default

    Quote Originally Posted by Nastrodamus
    Actually you need to get some facts before you make those assumptions. I am currently doing hardware reviews on both consoles myself, and so far the Xbox 360 is far surpassing the PS3.

    Here is a report that compares both consoles' hardware and shows, for each of the major components, which one performs better at what.

    http://xbox360.ign.com/articles/617/617951p1.html

    LOL LOL LOL!! Nastro, that's an M$-paid-for study and I think it's a bit dated. I would disregard such studies.

  7. #7
    Join Date
    Oct 2003
    Posts
    925
    Rep Power
    0

    Default Xenon CPU(Xbox 360) VS Cell CPU(PS3)

    Here's some info on the Xenon CPU (Xbox 360) and the Cell CPU (PS3):

    http://arstechnica.com/articles/paedia/cpu/cell-1.ars - Introducing the IBM/Sony/Toshiba Cell Processor — Part I: the SIMD processing units

    The CELL's SIMD processing unit

    As you can see, IBM has eliminated the instruction window and its attendant control logic, in favor of adding more storage space and more execution hardware. A Cell SPE doesn't do register renaming or instruction reordering, so it needs neither a rename register file nor a reorder buffer. The actual architecture of the Cell SPE is a dual-issue, statically scheduled SIMD processor with a large local storage (LS) area. In this respect, the individual SPUs are like very simple, PowerPC 601-era processors.

    The main differences between an individual SPE and an early RISC machine are twofold. First, and most obvious, is the fact that the Cell SPE is geared for single-precision SIMD computation. Most of its arithmetic instructions operate on 128-bit vectors of four 32-bit elements. So the execution core is packed with vector ALUs, instead of the traditional fixed-point ALUs. The second difference, and this is perhaps the most important, is that the L1 cache has been replaced by 256K of locally addressable memory. The SPE's ISA, which is not a VMX/Altivec derivative (more on this below), includes instructions for using the DMA controller to move data between main memory and local storage. The end result is that each SPE is like a very small vector computer, with its own "CPU" and RAM.
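
    To make the "128-bit vectors of four 32-bit elements" point concrete, here is a minimal plain-C sketch of the kind of 4-wide single-precision multiply-add the SPE's vector ALUs perform in a single instruction. This is only an illustration in ordinary C; real SPE code would use IBM's SPU intrinsics rather than a struct and a loop.

    #include <stdio.h>

    /* Models one 128-bit SIMD register as four packed 32-bit floats.  A real
       SPE would execute the whole loop body below as one fused multiply-add
       on a single 128-bit register. */
    typedef struct { float e[4]; } vec4;

    vec4 vec4_madd(vec4 a, vec4 b, vec4 c)
    {
        vec4 r;
        for (int i = 0; i < 4; i++)
            r.e[i] = a.e[i] * b.e[i] + c.e[i];   /* four lanes at once */
        return r;
    }

    int main(void)
    {
        vec4 a = {{1, 2, 3, 4}}, b = {{2, 2, 2, 2}}, c = {{0, 0, 0, 0}};
        vec4 r = vec4_madd(a, b, c);
        printf("%g %g %g %g\n", r.e[0], r.e[1], r.e[2], r.e[3]);   /* 2 4 6 8 */
        return 0;
    }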

    This RAM functions in the role of the L1 cache, but the fact that it is under the explicit control of the programmer means that it can be simpler than an L1 cache. The burden of managing the cache has been moved into software, with the result that the cache design has been greatly simplified. There is no tag RAM to search on each access, no prefetch, and none of the other overhead that accompanies a normal L1 cache. The SPEs also move the burden of branch prediction and code scheduling into software, much like a VLIW design.
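
    As a rough sketch of what moving the cache into software means for the programmer: instead of touching main memory and letting an L1 cache pull lines in automatically, SPE-style code explicitly stages a chunk into local storage, works on it there, and writes it back. The plain C below fakes the DMA step with memcpy purely for illustration; the buffer size and function name are made up, and real code would double-buffer so the next transfer overlaps the current computation.

    #include <string.h>

    #define LS_CHUNK 4096                       /* hypothetical staging size */
    static float local_store[LS_CHUNK];         /* stands in for the 256K LS */

    /* Explicit "DMA in -> compute -> DMA out" pattern.  On a real SPE the two
       memcpy calls would be DMA transfers issued to the DMA controller. */
    void scale_in_place(float *main_mem, size_t n, float k)
    {
        for (size_t off = 0; off < n; off += LS_CHUNK) {
            size_t len = (n - off < LS_CHUNK) ? (n - off) : LS_CHUNK;

            memcpy(local_store, main_mem + off, len * sizeof(float)); /* "get" */
            for (size_t i = 0; i < len; i++)
                local_store[i] *= k;                          /* compute in LS */
            memcpy(main_mem + off, local_store, len * sizeof(float)); /* "put" */
        }
    }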

    The SPE's very simple front end can take in two instructions at a time, check to see if they can operate in parallel, and then issue them either in parallel or in program order. These two instructions then travel down one of two pipes, "even" or "odd," to be executed. After execution, they're put back in sequence (if necessary) by the very simple commit unit and their results are written back to local memory. The individual SPUs can throw a lot overboard, because they rely on a regular, general-purpose PowerPC processor core to do all the normal kinds of computation that it takes to run regular code. The Cell system features eight of these SPUs all hanging off a central bus, with one 64-bit PowerPC core handling all of the regular computational chores. Thus all of the Cell's "smarts" can reside on the PPC core, while the SPUs just do the work that's assigned to them.
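
    A toy model of that dual-issue check, under the simplifying assumption that arithmetic goes down the even pipe while loads, stores, and branches go down the odd pipe (that assignment, and the names, are assumptions beyond the text above):

    /* Two adjacent instructions can issue together only if they want
       different pipes; otherwise they issue one after the other, in
       program order. */
    typedef enum { ARITH, LOAD_STORE, BRANCH } insn_kind;

    static int wants_odd_pipe(insn_kind k)
    {
        return k == LOAD_STORE || k == BRANCH;
    }

    int can_dual_issue(insn_kind first, insn_kind second)
    {
        return wants_odd_pipe(first) != wants_odd_pipe(second);
    }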

    To sum up, IBM has sort of reapplied the RISC approach of throwing control logic overboard in exchange for a wider execution core and a larger storage area that's situated closer to the execution core. The difference is that instead of the compiler taking up the slack (as in RISC), a combination of the compiler, the programmer, some very smart scheduling software, and a general-purpose CPU do the kind of scheduling and resource allocation work that the control logic used to do.
    http://arstechnica.com/articles/paedia/cpu/cell-2.ars - Introducing the IBM/Sony/Toshiba Cell Processor -- Part II: The Cell Architecture

    The Cell's basic architecture
    The basic architecture of the Cell is described by IBM as a "system on a chip" (SoC) design. This is a perfectly good characterization, but I'd take it even further and call Cell a "network on a chip." As I described yesterday, the Cell's eight SPUs are essentially full-blown vector "computers," insofar as they are fairly simple CPUs with their own local storage.

    These small vector computers are connected to each other and to the 512KB L2 cache via an element interface bus (EIB) that consists of four sixteen-byte data rings with 64-bit tags. This bus can transfer 96 bytes/cycle, and can handle over 100 outstanding requests.
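
    Those figures turn into a peak-bandwidth number with one multiplication; the sketch below uses an assumed bus clock (the EIB clock is not stated in the quoted text, so 1.6 GHz here is just an assumption for the arithmetic):

    #include <stdio.h>

    int main(void)
    {
        const double bytes_per_cycle = 96.0;    /* figure from the article      */
        const double eib_clock_hz    = 1.6e9;   /* assumed bus clock, not given */
        printf("theoretical EIB peak: %.1f GB/s\n",
               bytes_per_cycle * eib_clock_hz / 1e9);   /* 153.6 GB/s */
        return 0;
    }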



    The individual SPEs can use this bus to communicate with each other, and this includes the transfer of data in between SPEs acting as peers on the network. The SPEs also communicate with the L2 cache, with main memory (via the MIC), and with the rest of the system (via the BIC). The onboard memory interface controller (MIC) supports the new Rambus XDR memory standard, and the BIC (which I think stands for "bus interface controller" but I'm not 100% sure) has a coherent interface for SMP and a non-coherent interface for I/O.
    http://arstechnica.com/articles/paed.../xbox360-1.ars - Inside the Xbox 360, part I: procedural synthesis and dynamic worlds

    Dynamic worlds
    Since at least 2003, Microsoft has been talking up the idea of "procedural synthesis" in games. In addition to the information on the technique provided in interviews, Microsoft has filed a very detailed patent that outlines how this will work on the Xbox 360. Since the unveiling of the new console and the confirmation of many details that had previously been only unconfirmed rumor, it's now possible to read this patent with the actual Xbox 360 implementation details in mind in order to learn exactly what Microsoft's ideas for procedural synthesis are and how those ideas function in their next-generation console.

    In a nutshell, procedural synthesis is about making optimal use of system bandwidth and main memory by dynamically generating lower-level geometry data from statically stored higher-level scene data.
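
    As a small, entirely hypothetical illustration of that idea in C: instead of storing every vertex of a terrain patch in memory, a game can store only a grid resolution and a height formula, and expand them into full per-vertex geometry on demand.

    #include <math.h>
    #include <stdio.h>

    /* Made-up height field: the "higher-level scene data" is just this formula
       plus a grid size, yet it expands into arbitrarily many vertices. */
    static float height_at(float x, float z)
    {
        return 0.5f * sinf(x * 0.1f) * cosf(z * 0.1f);
    }

    static void emit_patch(int grid)
    {
        for (int i = 0; i < grid; i++)
            for (int j = 0; j < grid; j++) {
                float x = (float)i, z = (float)j;
                /* In a real engine this vertex would be streamed to the GPU
                   rather than printed. */
                printf("v %.2f %.2f %.2f\n", x, height_at(x, z), z);
            }
    }

    int main(void)
    {
        emit_patch(4);   /* 16 vertices synthesized from one stored integer */
        return 0;
    }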
    http://arstechnica.com/articles/paed.../xbox360-2.ars - Inside the Xbox 360, Part II: the Xenon CPU

    The PPE's pipeline and performance
    In its overall design approach the PPE shares a surprising number of similarities with the Netburst architecture that powers Intel's P4. In particular, I once described the P4's overall design philosophy as "narrow and deep," as opposed to the "wide and shallow" approach of the G4 and its kin. The PPE shares the P4's "narrow and deep" approach to performance, but with a few very important differences.

    The PPE's pipeline is 21 stages deep, the same number of pipeline stages as the Pentium 4 Northwood core. This 21-stage pipeline looks like a relic of an age prior to the recent multicore revolution, when clockspeed was still king. Still, it makes a certain amount of sense given the fact that the core was obviously designed to pack a lot of streaming media performance into a small on-die footprint. How can deep pipelines pack a lot of performance into a small die size? Let me explain.

    Deep pipelines are well suited to high-bandwidth streaming media processors, where the code stream is fairly compact and serial at the individual thread level and the datasets are large, uniform, and data-prefetch-friendly. So the PPE's general approach to performance is to execute serial instructions twice as fast instead of parallel instructions two at a time. This rapid-fire, serial execution approach means that each PPE takes up less space than it would if it were designed with ILP (and execution core width) in mind, but it also means that code stream parallelism must come from running multiple threads at once (as described in the previous section).
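
    In practice that means the programmer supplies the parallelism by splitting the stream across hardware threads. A minimal POSIX-threads sketch of that division of labor (the two-way split and the names are just for illustration):

    #include <pthread.h>
    #include <stdio.h>

    #define N 1000000
    static float data[N];

    struct slice { int begin, end; double sum; };

    /* Each thread runs a simple, serial loop over its half of the stream;
       the parallelism comes from running two of these at once, not from a
       wide out-of-order core extracting it automatically. */
    static void *sum_slice(void *arg)
    {
        struct slice *s = arg;
        for (int i = s->begin; i < s->end; i++)
            s->sum += data[i];
        return NULL;
    }

    int main(void)
    {
        for (int i = 0; i < N; i++) data[i] = 1.0f;

        struct slice a = { 0, N / 2, 0.0 }, b = { N / 2, N, 0.0 };
        pthread_t ta, tb;
        pthread_create(&ta, NULL, sum_slice, &a);
        pthread_create(&tb, NULL, sum_slice, &b);
        pthread_join(ta, NULL);
        pthread_join(tb, NULL);
        printf("total = %.0f\n", a.sum + b.sum);
        return 0;
    }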

    Deep pipelining also allows a computer architect to increase the number of instructions that the processor can hold and simultaneously execute by stacking more instructions into the same amount of hardware. So a machine with deeper pipelines may have fewer execution units, but it can have more instructions in various stages of execution simultaneously. Thus the more deeply pipelined processor could theoretically be smaller than a comparable "wide and shallow" design while holding a greater number of instructions, because it does more with less. I say "theoretically," because more deeply pipelined processors usually aren't any smaller in real life than their less deeply pipelined counterparts.

    This brings me back to a statement I made at the beginning of this section, where I said that the PPE's deep pipeline was designed to pack a lot of streaming media performance into a small amount of die space. At that point, you were probably thinking that deeply pipelined processors like the Pentium 4 tend to have large die sizes, and if you were thinking so then you were right. However, a deep pipeline doesn't have to spell a large die size, if you throw out the instruction window along with most of the hardware that's intended to help the deep pipeline work well with branchy code.

  8. #8
    Join Date
    Dec 2004
    Posts
    4,316
    Rep Power
    0

    Default

    too much reading, what's the conclusion maf?

  9. #9
    Join Date
    Feb 2003
    Posts
    4,163
    Rep Power
    0

    Default

    Quote Originally Posted by leoandru
    LOL LOL LOL!! Nastro, that's an M$-paid-for study and I think it's a bit dated. I would disregard such studies.
    Actually not really....

    As a techie you should be aware of the most important components when making a system (esp. for gaming).

    Processor, memory, video card, and front-side bus; everything else is a matter of nice to have rather than need to have.

    Compare those crucial components between the X360 and PS3. * Note that the X360 is using a PPC-type processor.
    Free Thinkers are those who are willing to use their minds without fearing to understand things that clash with their own customs, beliefs, or privileges. This state of mind is not common, but it is essential for right thinking; where it is absent, discussion is apt to become worse than useless.

  10. #10
    Join Date
    Oct 2004
    Posts
    4,814
    Rep Power
    24

    Default

    Quote Originally Posted by Nastrodamus
    Actually not really....

    As a techie you should be aware of the most important components when making a system (esp. for gaming).

    Processor, memory, video card, and front-side bus; everything else is a matter of nice to have rather than need to have.

    Compare those crucial components between the X360 and PS3. * Note that the X360 is using a PPC-type processor.
    yeah and the PS3 has one PPC processor controlling a nine-core processor.
    Well, to me it's not so much about technical comparison, cause why tell me the max bandwidth of the memory if I ain't gonna need all of it... It simply boils down to which system gives me the better gaming experience.
