25 Şubat 2013 Pazartesi

Jenny K

To contact us Click HERE
 

http://www.moidnevnik.com/lb/o5fvo749.083073?l73








2/25/2013 5:30:00 PM
Jenny K

__._,_.___
Reply via web post Reply to sender Reply to group Start a New Topic Messages in this topic (2)
Recent Activity:
  • New Members 1
Visit Your Group To unsubscribe, simply email computer-scrapping-unsubscribe@yahoogroups.com

Yahoo! Groups Switch to: Text-Only, Daily Digest • Unsubscribe • Terms of Use • Send us Feedback .
__,_._,___

Don't mess with the Google

As a hacker, should you succeed in obtaining a signing certificate (allowing you to perform MitM attacks, for example), whatever you do, don't attack *.google.com.

That's the message in this Google blogpost about the TURKTRUST incident, which says:
Late on December 24, Chrome detected and blocked an unauthorized digital certificate for the "*.google.com" domain
What this implies is that Chrome acts as 100 million sensors on the Internet watching for *.google.com MitM attacks. If you are a government wanting to spy on your citizens, then as soon as you insert a fraudulent signing certificate into your BlueCoat monitor, one of your citizens running Google Chrome is going to notify the mother ship.

This is a good thing. Microsoft (with IE) and Firefox should get into the act. They should likewise monitor other likely monitoring targets, like Facebook and Twitter. If the major browsers raised an alert whenever the certificates for major websites changed, this would severely restrict the ability of governments to monitor their citizens.

It appears that Firefox, Microsoft, and Chrome are not completely detrusting TURKTRUST. This is wrong. MitM should be an automatic fail for a CA. Remember that the root of the CA system is not the CAs themselves, but the browser vendors. The browser vendors should have a published list of rules that will get a CA detrusted, and MitM should be one of them.


Aaron's Law: repeal CFAA rather than amend it

I hereby give you complete authorization to access (over a network) any computer I own. Nothing you do is unauthorized or exceeds authorization in terms of the CFAA.

The solution to fixing the "Computer Fraud and Abuse Act" is not to amend it but to get rid of it. The Internet is world-wide; 95% of the hackers trying to break into your computers are beyond the reach of U.S. law. Rather than providing a meaningful deterrent to bad hackers, what the law really does is create a chilling effect for our own creative geniuses. Genius geeks from Steve Jobs to Aaron Swartz should feel free to push the boundaries of technology without prosecutors and juries second-guessing them.

Getting rid of the CFAA doesn't actually expose you to additional danger, which I demonstrate in the statement above. My computers are secure, which means that while I've given you legal access in terms of the CFAA to hack my computers, I haven't given you real access by giving you a password or username. I don't need the CFAA to protect my computers; I can protect them just fine myself. Or, if I can't, I've only made the threat 5% worse by giving US citizens permission alongside all the hackers from Russia, China, Brazil, and so on.

Getting rid of the CFAA doesn't get rid of other crimes. While I've given you permission to access my computers, I haven't given you access to my bank account or credit card number. Neither have I given you permission to physically steal the computer. This means all those hackers who are now behind bars for stealing money would still be behind bars.

Scalability: it's the question that drives us

In order to grok the concept of scalability, I've drawn a series of graphs. Talking about "scalability" is hard because we instinctively translate the numbers into "performance". But the two concepts are unrelated. We say things like "NodeJS is 85% as fast as Nginx", but speed doesn't matter, scalability does. The important difference between the two is how they scale, not how they perform. I'm going to show this with graphs in this post.

Consider the classic Apache web server. We typically benchmark the speed in terms of "requests per second". But there is another metric: "concurrent connections". Consider a web server handling 1,000 requests-per-second, where it takes 5 seconds to complete each request. That means at any point in time, on average, there will be 5,000 pending requests, or 5,000 concurrent connections. But Apache has a problem. It creates a thread to handle each connection. The operating system can't handle a lot of threads. So, once there are too many connections, performance falls off a cliff.

This is shown in the graph below. This is a hypothetical graph rather than a real benchmark, but it's about what you'll see in the real world. You'll see that around the 5000 connections point, performance falls off a cliff.



Let's say that you are happy with the performance at 5000 connections, but you need the server to support up to 10,000 connections. What do you do? The naive approach is to simply double the speed of the server, or buy a dual-core server.

But this naive approach doesn't work. As shown in the graph below, doubling performance doesn't double scalability.


While this graph shows a clear doubling of performance, it only shows an increase of about 20% in terms of scalability, handling about 6000 connections.

The same is true if we increase performance 4 times, 8 times, or even 16 times, as shown in the graph below. Even using a server 16 times as fast as the original, we still haven't even doubled scalability.


The solution to this problem isn't faster hardware, but changing the software to scale. Consider some server software that is a lot slower than Apache, but whose performance doesn't drop off so quickly when there are lots of connections to the server. The graph would look like the orange line in the following graph:


This orange line could be a server running NodeJS on my laptop computer. Even though it's slower than a big beefy 32-core server you spent $50,000 on, it'll perform better when you've got a situation that needs to handle 10,000 concurrent connections.

The point I'm trying to make is that "performance" and "scalability" are orthogonal problems. When trying to teach engineers how to fix scalability, their most common question is "but won't that hurt performance?". The answer is that it almost doesn't matter how much you hurt performance as long as it makes the application scale.

I've encapsulated all this text into the following picture. When I have a scalability problem, I can't solve it by increasing performance, as shown by all the curvy lines. Instead, I have to fix the problem itself -- even if it means lower performance, as shown by the orange line. The moral of the story is that performance and scalability are orthogonal. This is the one picture you need to remember when dealing with scalability.


I use the "notebook computer running NodeJS" as an example because it's frankly unbelievable. When somebody has invested huge amounts of money in big solutions, they flat out refuse to believe that a tiny notebook computer can vastly outperform them at scale. In the dumber market (government agencies, big corporations) the word "scale" is used to refer to big hardware, not smarter software.

In recent years, interest in scalable servers has started to grow rapidly. This is shown in the following Netcraft graph of the most popular web servers. Apache still dominates, but due to its scalability problems, its popularity is dropping. Nginx, the most popular scalable alternative, has been growing rapidly.



Nginx is represented by the green line in the above graph, and you can see how it's grown from nothing to become the second most popular web server in the last 5 years.

But even Nginx has limits. It scales to about 100,000 concurrent connections before operating system limits prevent further scalability. This is far below what the hardware is capable of. Today's hardware, even cheap desktop computers, can scale to 10 million connections, or 100 times what Nginx is practically capable of. I call this the "C10M Problem", referring to how software is still far below the capability of the hardware.

Conclusion

Talking about scalability is hard. It's easier to understand if you can visualize it. That's why I put together these graphs. I want to use these graphs in future presentations. But, if I store them on my hard disk, I'm likely to lose them. Therefore, I'm posting these to the intertubes so that I can just use the google to find them later. You are free to use these graphs too, if you want, with no strings attached, though I wouldn't mind credit when it's not too much trouble.

I'm finally profiting

Hey,


I just got an exclusive peak inside! This is going to be crazy. Who ever is left behind on this one I feel sorry for them. Only the elite are invited but I'm sneaking a few of my closets friends in.
> See it here
Hurry up before he catches me. Once your in your going to love me for how much your about to start learning.
> Click Here

Its crazy!

Talk soon

Jennifer

scam, phishing, identity theft, spam

24 Şubat 2013 Pazar

[OneStopSAP] SAP HR Certifications Sample Questions Set 1


Sample Papers for SAP - HR Exam

SAP HR Certifications Sample Questions Set 1
http://www.onestopsap.com//sap-sample-paper/sap-hr/details/ sap-hr-certifications-sample-q.asp

 
SAP HR Certifications Sample Questions Set 2
http://www.onestopsap.com//sap-sample-paper/sap-hr/details/ sap-hr-certifications-sample-q.asp

 
Question Excerpt From SAP HR certification test 1
http://www.onestopsap.com//sap-sample-paper/sap-hr/details/ question-excerpt-from-sap-hr-c.asp

 
Payroll Related SAP HR Certification Questions
http://www.onestopsap.com//sap-sample-paper/sap-hr/details/ payroll-related-sap-hr-certifi.asp

 
SAP HR Certifications Sample Questions Set 4
http://www.onestopsap.com//sap-sample-paper/sap-hr/details/ sap-hr-certifications-sample-q.asp

 
SAP HR Certifications Sample Questions Set 3
http://www.onestopsap.com//sap-sample-paper/sap-hr/details/ sap-hr-certifications-sample-q.asp

 
SAP HR Certifications Sample Questions
http://www.onestopsap.com//sap-sample-paper/sap-hr/details/ sap-hr-certifications-sample-q.asp

 
Questions and Answers for SAP HR
http://www.onestopsap.com//sap-sample-paper/sap-hr/details/ questions-and-answers-for-sap.asp

 
SAP HR Certification Model Questions
http://www.onestopsap.com//sap-sample-paper/sap-hr/details/ sap-hr-certification-model-que.asp

 
SAP HR Interviews & Resumes
http://www.onestopsap.com//sap-sample-paper/sap-hr/details/ sap-hr-interviews-resumes-215.asp

 


[OneStopSAP] TOP 10 DATA STRUCTURE INTERVIEW QUESTIONS and Answers


Cool Interview
DATA STRUCTURE INTERVIEW QUESTIONS

 
Data Structure Interview Questions
  • What is placement new?
    http://www.coolinterview.com/interview/59896/

     
  • How many different trees are possible with 10 nodes ?
    http://www.coolinterview.com/interview/59894/

     
  • What is the bucket size, when the overlapping and collision occur at same time?
    http://www.coolinterview.com/interview/59893/

     
  • What is the easiest sorting method to use?
    http://www.coolinterview.com/interview/59892/

     
  • What is the heap?
    http://www.coolinterview.com/interview/59891/

     
  • How can I search for data in a linked list?
    http://www.coolinterview.com/interview/59890/

     
  • What is the quickest sorting method to use?
    http://www.coolinterview.com/interview/59889/

     
  • Whether Linked List is linear or Non-linear data structure?
    http://www.coolinterview.com/interview/59888/

     
  • Does the minimum spanning tree of a graph give the shortest distance between any 2 specified nodes?
    http://www.coolinterview.com/interview/59887/

     
  • What is a spanning Tree?
    http://www.coolinterview.com/interview/59886/

     


*CU ok My Little Chef

 

cu ok, My Little Chef kit & freebie at

http://ditzbitzkidzkitz.weebly.com/my-little-chef.html

 


*AD* Sail Away Freebies

 

freebie kit, alpha & 2 cluster frames at

http://ditzbitzkidzkitz.weebly.com/sail-away.html

 



23 Şubat 2013 Cumartesi


Multi-core scaling: it’s not multi-threaded


I’m writing a series of posts based on my Shmoocon talk. In this post, I’m going to discuss “multi-core scaling”.

In the decade leading to 2001, Intel CPUs went from 33-MHz to 3-GHz, a hundred-fold increase in speed. In the decade since, they’ve been stuck at 3-GHz. Instead of faster clock speeds, they’ve been getting more logic. Instead of one instruction per clock cycle, they now execute four (“superscalar”). Instead of one computation per instruction, they now do eight (“SIMD”). Instead of a single CPU on a chip, they now put four (“multi-core”).

However, desktop processors have been stuck at four cores for several years now. That’s because the software is lagging. Multi-threaded software goes up to about four cores, but past that point, it fails to get any benefit from additional cores. Worse, adding cores past four often makes software go slower.

This post talks about scaling code past the four-core limit. Instead of the graph above showing performance falling off after four cores, these techniques lead to a graph like the one below, with performance increasing as more cores are added.


The reason code fails to scale is that it’s written according to out-of-date principles based on “multi-tasking”. Multi-tasking was the problem of making a single core run multiple tasks. The core would switch quickly from one task to the next to make it appear they were all running at the same time, even though during any particular microsecond only one task was running at a time. We now call this “multi-threading”, where “threads” are lighter weight tasks.


But we aren’t trying to make many tasks run on a single core. We are trying to split a single task across multiple cores. It’s the exact opposite problem. It only looks similar because in both cases we use “threads”. In every other aspect, the problems are opposite.

The biggest issue is synchronization. As your professors pounded into you, two threads/cores cannot modify the same piece of data at the same time, or it will be corrupted. Even if the chance of them doing the modification at exactly the same time is rare, it always happens eventually. Computers do a billion computations per second, so if the chance is one in a billion, that means corruption happens about once per second.

The prescribed method for resolving this is a “lock”, where one thread stops and waits when another thread is modifying that piece of data. Since it’s rare for two threads to actually conflict in practice, it’s rare for a thread to actually wait.

There are multiple types of locks, like spinlocks, mutexes, critical sections, semaphores, and so on. Even among these classes there are many variations. What they all have in common is that when conflict occurs, they cause the thread to stop and wait.

It’s the waiting that is the problem. As more cores are added, the chance they’ll conflict and have to wait increases dramatically. Moreover, how they wait is a big problem.

With the Linux “pthread_mutex_t”, when code stops and waits, it does a system call to return back to the kernel. This is a good idea when there’s only one CPU core running multiple threads, because the current thread can’t make forward progress anyway until the thread that owns the lock is allowed to run and release it.

But with multi-core, this becomes insanely bad. The cost of going into the kernel and going through the thread scheduling system is huge. It’s why software using “mutexes” gets slower as you add more cores, because this constant kernel traffic adds a lot of extra overhead.

In short, mutexes are good when many threads share a core, but bad when it’s a single thread per core.

What we want is synchronization that doesn’t cause a thread to stop and wait. The situation is a lot like traffic intersections, where multiple flows of automobiles must share a common resource. One technique is to use traffic lights to force one direction to stop and wait while the other proceeds. Another technique is the freeway, where an overpass is used to allow both directions to proceed at the same time without stopping.

What we therefore want is “freeway overpass” synchronization. Such techniques exist, though they can get very complicated.

The most basic technique exploits the fact that on modern CPUs, either reading or writing an aligned number in memory is atomic. By this I mean that combining a read with a write can lead to corruption, but doing either a read or a write alone does not. In the past, reading a multibyte number could lead to corruption, because in the nanoseconds between reading the first byte of the number, another core could write to the second byte. This can no longer happen.

Let’s exploit this fact with the packet counters on Linux. The network stack keeps track of packets/bytes received/transmitted, as well as counts of errors that occur. Multiple cores may be processing different packets at the same time. Therefore, they need to synchronize their updates to the packet counters. But, if they have to stop and wait during the synchronization, this will lead to an enormous packet loss.

The way they solve this is for each core to maintain its own packet counters. When you call “ifconfig” to read the packet counters and display them, that thread just sums up all the individual core’s counters into a single set of counters. Because that thread only reads the counters, and reads are atomic, no corruption is possible.

Well, some corruption is possible. Consider if the program wanted to report “average packet size”, which is calculated as “total bytes” divided by “total packets”. Reading a single integer is atomic, but reading both integers is not. Therefore, it’s possible that sometimes the thread will read “total bytes”, then another core updates the counters, then the thread reads “total packets” and does the calculation. This will lead to a slightly lower average packet size than if these counters were properly synchronized. So this technique isn’t perfect; whether that matters depends on your requirements.

This is just one example. There are many other techniques for narrow cases where either traditional synchronization is not needed at all, or can mostly be avoided. Some terms to google along this line are the “ring buffer” and “read copy update (RCU)”.

When we say “atomic” here, though, we don’t mean an individual read or write, but combining the two into a single, non-interruptible operation.

The x86 processor has an assembly language instruction called “lock”. It’s not really its own instruction; instead, it modifies the following instruction to be atomic. When the normal “add” instruction reads data from memory, adds to it, then writes the data back, another core could modify that memory location in the meantime, causing corruption. The “lock add” instruction prevents this from happening, guaranteeing the entire addition to be atomic.

Think of this as a “hardware mutex” rather than the traditional “software mutex”, except that it causes the code to stop and wait for about 30 clock cycles rather than 30,000. The cost comes from the operation being resolved in the L3 or “last level” cache; on current Intel CPUs, that takes about 30 clock cycles.

The “lock” prefix works only on a few arithmetic instructions, and only one value at a time. To work with more than one value, you need to use the “cmpxchg16b” instruction. What you do is first read 16 bytes of data. Then you make all the changes you want to those 16 bytes. Then, using “cmpxchg16b”, you attempt to write all the changes back again. If that memory was changed in the meantime, the instruction fails and sets a flag. That way, you know the synchronization failed and the data would have been corrupted, so you must back up and try again.

It’s 16 bytes because that’s the size of two pointers. It allows you to modify two pointers atomically, or a pointer plus an integer. This feature is called “CAS2” or “compare-and-swap two numbers”, and is the basis for a lot of the “lock-free” stuff described below.

Intel’s new “Haswell” processor, shipping in mid-2013, extends this model to larger transactions over regions of memory that do not have to be next to each other. This feature is called “transactional memory”. It will make good, fast, scalable synchronization much easier in the future.

You don’t want to mess around with assembly language, especially since you want your code to run on both x86 and ARM. Therefore, compilers let you access these instructions through built-in functions. On gcc, example functions are __sync_fetch_and_add() and __sync_bool_compare_and_swap(). They work just as well on ARM as on x86. Microsoft has similar intrinsics for its compilers.

The above atomics act on one thing at a time. In practice, you need something more complex. For example, you might have 100 CPU cores trying to work off the same hash table, inserting things, removing things, and even resizing the entire table, all at the same time, all without requiring a core to stop and wait for another to finish.

The general term this goes under is “lock-free”. You don’t have to write hash tables, linked lists, and other lock-free data structures yourself. Instead, you simply use libraries created by other people.

You can also link to large subsystems that incorporate lock-free techniques inside. A good example is the heap, i.e. “malloc()”. The standard Microsoft heap has a global mutex that really saps performance in multi-core code. You can replace it with a lock-free heap simply by linking to another library. And such things tend to be cross-platform.

You should be very afraid of doing this yourself unless you are willing to study the problem in its entirety. It’s like crypto: people tend to make the same naïve mistakes. One example is the “ABA” problem. When doing a “compare-and-swap”, like the cmpxchg instruction mentioned above, sometimes the value changes and then changes back again. The compare sees the original value and succeeds, so you think nothing else has changed, but it has. Another example is the “weak/strong memory model” problem: your lock-free code might work on x86, which has a strong memory model, but fail on ARM, which has a weak one. If you get the urge to write your own lock-free algorithms, google these issues; otherwise, they will bite you.

While synchronization is the biggest issue with thread scalability, there are other concerns as well.

When you go multi-core, you have to divide your application across multiple cores. There are two fundamental ways of doing this: pipelining and worker threads. In the pipeline model, each thread does a different task, then hands off the task to the next thread in the pipeline. In the worker model, each thread carries out the same task. Of course, you can combine the two models, where a stage in a pipeline might consist of several equal worker threads.

There are tradeoffs for each approach. In the pipeline approach, there is a lot of synchronization overhead as you pass the job from one thread to the next. In the worker-thread approach, anything that is shared among all the threads becomes a synchronization bottleneck.

Thus, when there is a shared resource, you want to split that off as a stage in a pipeline. When threads can work independently without sharing something, you want peer worker threads.

Consider a multi-core IDS (intrusion detection system) like Snort as an example. The first stage is pulling packets from the network adapter to be analyzed. This is a shared resource among all the threads, and hence, a synchronization bottleneck. You might therefore want to split this out as a pipeline stage, and have one thread read packets, and then dispatch those packets to worker threads. Likewise, another shared resource is the table of TCP control blocks (TCB).

In the real world, Intel network cards solve this problem for you. The network card itself pre-processes TCP packets, hashing the IP/port information. Based on that hash, it dispatches packets into different queues. The popular open-source “single-threaded” Snort application exploits this, running a wholly separate process for each queue. Thus, the entire application is “multi-core” even though it’s “single-threaded”, using the pipeline model with one thread (running inside the network adapter) to split traffic into queues, and worker processes to process the packets.

What I find fascinating about Snort is that it would probably be a stupid idea to turn this classically single-threaded program into a multi-threaded one. Most of the data doesn’t need to be shared. When you do need to share data, just create a shared memory region (using the memory-mapping/page-table feature of the operating system) that multiple processes can use. Take, for example, my “packet counter” examples above. Each Snort process can keep its packet counters in such a shared-memory region. This allows another process to read the packet counters of all the individual processes, sum them, and report the combined totals.

In other words, a redesigned multi-threaded Snort would put a lot of structures in “thread-local storage” anyway. A multi-process Snort simply goes in the other direction, moving the shared stuff into “memory-mapped” regions common to all the processes. It’s fundamentally the same thing, especially on Linux, where processes and threads are largely equivalent anyway.

What I’m trying to show you here is that “multi-core” doesn’t automatically mean “multi-threaded”. Snort is single-threaded, but it’s a multi-core product. It doesn’t yet use memory-mapping to share data among processes, and therefore lacks some features, but it probably will in the future.

I mention Snort because it’s also a good example for playing around with Linux features. In theory, Snort can act as an “IPS”, inline with network traffic, where good traffic is forwarded and bad traffic is blocked. In practice, this is a bad idea, because the Linux kernel can switch out a packet-processing thread for a few milliseconds at a time, causing enormous jitter problems in Snort. You don’t want this to happen.

The way to fix Snort’s jitter issues is to change the Linux boot parameters. For example, set “maxcpus=2”. This causes Linux to use only the first two CPUs of the system for normal scheduling. Sure, it knows other CPU cores exist; it just will never, by default, schedule a thread to run on them. (The “isolcpus=” boot parameter is a related knob: it keeps the reserved cores online but excludes them from normal scheduling.)

Then what you do in your code is call the “pthread_setaffinity_np()” function to put your thread on one of the reserved CPUs (there is a Snort configuration option to do this per process). As long as you manually put only one thread on each such CPU, it will NEVER be interrupted by the Linux scheduler. Only if you schedule two threads on a CPU will the interruption happen. Thus, you configure each Snort process to run on its own dedicated CPU, and a lot of the jitter in IPS mode goes away.

You can still get hardware interrupts, though. Interrupt handlers are really short, so they probably won’t exceed your jitter budget, but if they do, you can tweak that as well: go into “/proc/irq/<number>/smp_affinity” and steer interrupts away from the cores running your Snort processing threads.

At this point, I’m a little hazy about what precisely happens. What I think will happen is that your thread won’t be interrupted, not even for a clock cycle. I need to test that using “rdtsc” counters to see exactly when the thread might be interrupted. Even if it is interrupted, it should be good for less than 1 microsecond of jitter. Since an L3 cache miss is about 0.1 microseconds of jitter, this is about as low as you can practically get.

Of course, the moment you use a “pthread_mutex_t” in your code for synchronization, then you will get a context switch, and this will throw your entire jitter budget out the window, even if you have scheduled CPUs correctly.


Conclusion

The overall theme of my talk was to impress upon the audience that in order to create a scalable application, you need to move your code out of the operating system kernel. You need to code everything yourself instead of letting the kernel do the heavy lifting for you. What I’ve shown in this post is how this applies to thread synchronization. Your basic design should be one thread per core, with lock-free synchronization that never causes a thread to stop and wait.

Specifically, I’ve tried to drill into you the idea that what people call “multi-threaded” coding is not the same as “multi-core”. Multi-threaded techniques, like mutexes, don’t scale on multi-core. Conversely, as Snort demonstrates, you can split a problem across multiple processes instead of threads, and still have multi-core code.

Ruby on OSX 10.8 followup

After a ton of private comments about using Homebrew instead of MacPorts, I decided to try it out. I did a clean install on my MacBook Pro, and here are the steps I followed.

1. Install Xcode 4.6 and command line tools.
2. Open terminal and run command:
\curl -L https://get.rvm.io | bash -s head --ruby

3. Enjoy ruby.

That is much easier. So much easier! Apparently rvm head will install Homebrew, all the required dependencies, and build a working copy of ruby. The #rvm channel on freenode helped me with this. I am now upset at the time I wasted trying to get the other way to work.

This may be old news to some, but I wanted to throw this up because I spent a ton of time Googling and did not find a good solution; I hope this helps others. Now I am going to build the ultimate post-reinstall script for setting up OSX for security people!




