
Thread: Socks V4

  1. #11
    Join Date
    Jan 2005
    Posts
    3,151
    Rep Power
    0

    Default

    My point exactly: if the language is so great, why not develop their OS in it?

    Quick answer: It is not efficient.
    Quote Originally Posted by leoandru
    When it comes to efficiency, most people don't notice a difference of a few milliseconds.
    Try to think outside the box. A few milliseconds here and there add up when the number of transactions is great.

    Example:

    A VB calculator: Grrrreat
    A VB DVD Ripper: NotGreat

    Regards,
    Pogi Tuner.

  2. #12
    Join Date
    Oct 2004
    Posts
    4,814
    Rep Power
    24

    Default

    Quote Originally Posted by pogi_2nr
    Quick answer: It is not efficient.
    Try to think outside the box. A few milliseconds here and there add up when the number of transactions is great.

    Example:

    A VB calculator: Grrrreat
    A VB DVD Ripper: NotGreat

    Regards,
    Pogi Tuner.
    I still don't agree. C# is compiled completely to native code before executing. The compiled code is cached for future execution, so on the next run there is no recompiling.

    Now when this .exe is run (command prompt> <filename>[.exe]), the MSIL is compiled into CPU-specific code by a Just-In-Time compiler (JITer). The runtime provides one or more JIT compilers for each computer architecture the runtime operates on. This feature allows developers to write a set of MSIL that can be JIT-compiled and executed on computers with different architectures. The CLR doesn't provide an interpreter to interpret the MSIL code. Instead, the MSIL code always runs natively, by compiling the whole MSIL in one shot to the machine code of the underlying computer architecture.
    http://www.c-sharpcorner.com/Article...arpProgram.asp

    The difference in milliseconds is caused by the initial compilation.
    Last edited by leoandru; Apr 25, 2005 at 01:43 PM.
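
    Here is a minimal sketch of how one could observe that initial compilation cost; the names and numbers are illustrative, not from the thread:

        using System;
        using System.Diagnostics;

        class JitWarmup
        {
            // Any method will do; its first call pays the one-time MSIL-to-native cost.
            static long SumSquares(int n)
            {
                long total = 0;
                for (int i = 0; i < n; i++)
                    total += (long)i * i;
                return total;
            }

            static void Main()
            {
                Stopwatch sw = Stopwatch.StartNew();
                SumSquares(1000);   // cold call: includes JIT compilation of SumSquares
                sw.Stop();
                Console.WriteLine("Cold call (includes JIT): {0} ms", sw.Elapsed.TotalMilliseconds);

                sw = Stopwatch.StartNew();
                SumSquares(1000);   // warm call: runs the already-compiled native code
                sw.Stop();
                Console.WriteLine("Warm call (native only):  {0} ms", sw.Elapsed.TotalMilliseconds);
            }
        }

    For caching across runs, .NET also ships ngen.exe, which precompiles an assembly to a native image ahead of time so even the first run skips the JIT step.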

  3. #13
    Join Date
    Jan 2005
    Posts
    3,151
    Rep Power
    0

    Default

    Quote Originally Posted by leoandru
    The difference in milliseconds is caused by the initial compilation.
    silly silly

    What we need here to settle this is a proof of concept. Otherwise, let's agree to disagree.

    Regards,
    Pogi Tuner.

  4. #14
    Join Date
    Oct 2004
    Posts
    4,814
    Rep Power
    24

    Default

    Alright, we disagree! When I find a nice "proof of concept" I'll post it. From my searches, I don't think enough research has gone into this theory.

  5. #15
    Join Date
    Oct 2004
    Posts
    4,814
    Rep Power
    24

    Default

    Here are some good benchmarks of Java, C, and C#: http://www.shudo.net/jit/perf/

  6. #16
    Join Date
    Jan 2005
    Posts
    3,151
    Rep Power
    0

    Default

    I don't fully understand those benchmarks, but from what I am seeing it looks like C# is dragging its feet in those graphs.

  7. #17
    Join Date
    Oct 2004
    Posts
    4,814
    Rep Power
    24

    Default

    Alright, pogi, you win... but C# isn't as terribly inefficient as you make it seem, and that's all I'm saying. It may not be suited to developing mission-critical systems, but it's well suited to developing desktop apps and network services.

    Anyway, I'm going to add SOCKS version 5 and the authentication protocols to that code. HeHe, I need it to get to servers that are blocked at work.
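
    For anyone following along, the SOCKS5 greeting plus the username/password sub-negotiation (RFC 1928 and RFC 1929) looks roughly like this. This is a minimal C# sketch, not the code from this thread, and the helper names are made up:

        using System;
        using System.IO;
        using System.Text;

        class Socks5Handshake
        {
            // Performs the SOCKS5 greeting and username/password sub-negotiation
            // over an already-connected stream (e.g. a NetworkStream).
            static void Authenticate(Stream s, string user, string pass)
            {
                // Greeting: version 5, one method offered, 0x02 = username/password.
                s.Write(new byte[] { 0x05, 0x01, 0x02 }, 0, 3);

                byte[] reply = new byte[2];
                ReadExact(s, reply);            // reply: [version, chosen method]
                if (reply[1] != 0x02)
                    throw new IOException("Server refused username/password auth.");

                // Sub-negotiation: version 1, ulen, uname, plen, passwd.
                byte[] u = Encoding.ASCII.GetBytes(user);
                byte[] p = Encoding.ASCII.GetBytes(pass);
                using (MemoryStream ms = new MemoryStream())
                {
                    ms.WriteByte(0x01);
                    ms.WriteByte((byte)u.Length);
                    ms.Write(u, 0, u.Length);
                    ms.WriteByte((byte)p.Length);
                    ms.Write(p, 0, p.Length);
                    byte[] buf = ms.ToArray();
                    s.Write(buf, 0, buf.Length);
                }

                ReadExact(s, reply);            // reply: [version, status], 0x00 = success
                if (reply[1] != 0x00)
                    throw new IOException("Authentication failed.");
            }

            // Reads exactly buf.Length bytes or throws; Stream.Read may return short.
            static void ReadExact(Stream s, byte[] buf)
            {
                int read = 0;
                while (read < buf.Length)
                {
                    int n = s.Read(buf, read, buf.Length - read);
                    if (n <= 0) throw new IOException("Connection closed.");
                    read += n;
                }
            }
        }

    After a successful sub-negotiation, the client sends the usual CONNECT request (version 5, command 1), much like SOCKS4 but with the new address formats.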

  8. #18
    Join Date
    Jan 2005
    Posts
    3,151
    Rep Power
    0

    Default

    I never said it was terribly inefficient and didn't have any use. I think it would be very
    useful for developing applications in a single-user environment that is not processing large
    amounts of data.

    One scenario I am very familiar with is shell providers. The processes that execute
    on these boxes are not mission-critical, but there are many of them waiting for
    CPU time. This is where the little bit of inefficiency adds up to a non-responsive system.
    These systems normally have over 100 detached user processes constantly processing
    data. Case 1: the processes are implemented in C, which is the norm. Case 2: C# is used.
    Let us imagine that the C implementation requires 10 cycles of CPU time to process
    1 unit of data and the C# one requires 5 additional cycles to process the same data.
    In a single-user environment that difference is a joke, but for the medium-sized
    multiuser environment in my scenario it would equate to a 50% jump in load (the
    arithmetic is spelled out after this post).

    My hands are too cold to continue typing the rest, you get the gist.

    Regards,
    Pogi Tuner.
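
    Spelling out the arithmetic in that scenario (every number here is the post's hypothetical, not a measurement):

        using System;

        class LoadScenario
        {
            static void Main()
            {
                const int processes    = 100;         // detached user processes, per the scenario
                const int cCycles      = 10;          // cycles per unit of data, C implementation
                const int csharpCycles = cCycles + 5; // the assumed 5 extra cycles for C#

                int cLoad      = processes * cCycles;      // 1000 cycles per unit across the box
                int csharpLoad = processes * csharpCycles; // 1500 cycles per unit across the box

                Console.WriteLine("C total:  {0} cycles", cLoad);
                Console.WriteLine("C# total: {0} cycles", csharpLoad);
                Console.WriteLine("Increase: {0}%", 100.0 * (csharpLoad - cLoad) / cLoad); // 50%
            }
        }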

  9. #19
    Join Date
    Oct 2004
    Posts
    4,814
    Rep Power
    24

    Default

    I understand. It all comes down to the implementation of the language and the programmer, but if you remove the coder from the scenario, you can look at it from this side: C is over 20 years old. It has been around long enough that I believe its implementations are optimized to the max, while C#, on the other hand, still has a long way to go. There is room for much improvement where optimization is concerned, and I wouldn't be surprised if it gets used in mission-critical systems some years ahead.

  10. #20
    Join Date
    Jan 2005
    Posts
    3,151
    Rep Power
    0

    Default

    The paradigm is the problem; OO is just generally slower than procedural, IMO.

    I once wrote a C++ program, ran it, and was not at all satisfied with its performance.
    I rewrote the same program in C, ran it, and it ran at the expected speed. Now, I am not
    knocking OO in any way, shape, or form; I am just saying it's not the fastest around.

    Regards,
    Pogi Tuner.
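
    One concrete way to see the kind of overhead being described is to compare a virtual call through a base-class reference against a direct, statically bound call. A rough micro-benchmark sketch; the types are made up, and the gap varies a lot with the runtime and JIT inlining:

        using System;
        using System.Diagnostics;

        abstract class Shape { public abstract double Area(); }   // OO style: virtual dispatch
        class Circle : Shape
        {
            public double R;
            public override double Area() { return Math.PI * R * R; }
        }

        class DispatchDemo
        {
            // Procedural-style equivalent: a direct call the compiler can bind statically.
            static double CircleArea(double r) { return Math.PI * r * r; }

            static void Main()
            {
                const int n = 100000000;
                Shape s = new Circle { R = 2.0 };
                double sink = 0;

                Stopwatch sw = Stopwatch.StartNew();
                for (int i = 0; i < n; i++) sink += s.Area();        // indirect call via vtable
                Console.WriteLine("virtual: {0} ms", sw.ElapsedMilliseconds);

                sw = Stopwatch.StartNew();
                for (int i = 0; i < n; i++) sink += CircleArea(2.0); // direct call, may inline
                Console.WriteLine("direct:  {0} ms", sw.ElapsedMilliseconds);

                Console.WriteLine(sink); // keep the loops from being optimized away
            }
        }

    Beyond the indirect jump itself, virtual calls are harder for the compiler to inline, which is usually the bigger cost.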
