The alertable wait / APC


The difference between thread-safety and re-entrancy

What happens if you allocate with vector "new[]" and free with scalar "delete"?

Retail code debugging

$vframe tells you the 'virtual frame pointer': the memory address where you can find the stack frame. If the function has a true (EBP-based) stack frame, memory is laid out like the table below (the conventional x86 frame layout). $vframe is extremely helpful when retail debugging because it tells you whereabouts on the stack to look for your local variables.

  • [ebp+8] and above — function arguments

  • [ebp+4] — return address

  • [ebp+0] — saved EBP of the caller

  • [ebp-4] and below — local variables

The 32-bit x86 calling conventions

The following example shows the results of making a function call using various calling conventions. This example is based on the following function skeleton.

void MyFunc(char c, short s, int i, double f);
MyFunc('x', 12, 8192, 2.7183);

Nice callstack frame.

Windows keyed events

This is when keyed events were born.  They were added to Windows XP as a new kernel object type, and there is always one global event \KernelObjects\CritSecOutOfMemoryEvent, shared among all processes.  There is no need for any of your code to initialize or create it—it’s always there and always available, regardless of the amount of resources on the machine.  Having it there always adds a single HANDLE per process, which is a very small price to pay for the benefit that comes along with it.  If you dump the handles with !handle in WinDbg, you’ll always see one of type KeyedEvent.  Well, what does it do?

  • EnterCriticalSection

  • InitializeCriticalSectionAndSpinCount

  • \KernelObjects\CritSecOutOfMemoryEvent

Where possible, prefer structured lifetimes of threads

Where possible, prefer structured lifetimes: ones that are local, nested, bounded, and deterministic. This is true no matter what kind of lifetime we're considering, including object lifetimes, thread or task lifetimes, lock lifetimes, or any other kind. Prefer scoped locking, using RAII lock-owning objects (C++, C# via using) or scoped language features (C# lock, Java synchronized). Prefer scoped tasks, wherever possible, particularly for divide-and-conquer and similar strategies where structuredness is natural. Unstructured lifetimes can be perfectly appropriate, of course, but we should be sure we need them because they always incur at least some cost in each of code complexity, code clarity and maintainability, and run-time performance. Where possible, avoid slippery spaghetti code, which becomes all the worse a nightmare to build and maintain when the lifetime issues are amplified by concurrency.

concurrent_queue and concurrent_vector

concurrent_queue<T> is very similar to std::queue<T>; it offers push and try_pop operations, plus 'unsafe' iterators and size accessors (these are not thread-safe during concurrent pushes and pops).

concurrent_vector<T> is most similar to a std::vector<T> and it offers a push_back method that is internally synchronized across threads and allows efficient thread safe growth of the vector. Like std::vector, concurrent_vector has random access iterators, but unlike std::vector, the guarantee of contiguous storage is removed and there are no insert and erase methods.
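The push/try_pop contract described above can be illustrated with a minimal mutex-based sketch. This is not the actual PPL/ConcRT implementation (which avoids a global lock); it only mimics the interface shape, and `naive_concurrent_queue` is a made-up name:

```cpp
#include <cassert>
#include <mutex>
#include <queue>
#include <utility>

// Minimal mutex-based sketch of the push/try_pop interface described above.
template <typename T>
class naive_concurrent_queue {
    std::queue<T> q_;
    mutable std::mutex m_;
public:
    void push(T value) {
        std::lock_guard<std::mutex> lock(m_);
        q_.push(std::move(value));
    }
    // Returns false instead of blocking when the queue is empty --
    // the same shape as concurrent_queue<T>::try_pop.
    bool try_pop(T& out) {
        std::lock_guard<std::mutex> lock(m_);
        if (q_.empty()) return false;
        out = std::move(q_.front());
        q_.pop();
        return true;
    }
};
```

The point of the try_pop shape is that "is it empty?" and "take the front" happen atomically; a separate empty()/front()/pop() sequence would race under concurrency.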

What does (and doesn't) "thread-safe" mean?

My point is not that the definition is wrong; as informal definitions of thread safety go, this one is not terrible. Rather, my point is that the definition indicates that the concept itself is completely vague and essentially means nothing more than "behaves correctly in some situations". Therefore, when I'm asked "is this code thread safe?" I always have to push back and ask "what are the exact threading scenarios you are concerned about?" and "exactly what is correct behaviour of the object in every one of those scenarios?"

Carl Daniel 20 Oct 2009 7:19 AM

I have to agree with Eric on this one.  While there are good definitions of "thread safe", there is no universal agreement on which of the many good definitions is "correct".  Correctness, of course, depends on the circumstances.  As a result, the term "thread safe" with no further qualification is vague at best, dangerously misleading at worst.

Thread safety is somewhat analogous to exception safety.  The C++ community has settled on a multi-tier definition of "exception safe" - I would propose that a similar family of thread safety guarantees would be a useful addition to the dialog.  The mathematical definition that Alun provided above sounds like a good candidate for being "the strong guarantee".  At another extreme, a class that's documented as providing a single method that can be invoked by fewer than 3 threads under specific circumstances would be an example of a very weak guarantee.

The problem with the very strong guarantee that Alun provided - just like the strong exception safety guarantee in C++ - is that most cases don't require a guarantee that strong, and generally speaking, providing such a strong guarantee is more difficult and less efficient (naturally, there are always exceptions).

Google - Building Maker


Multithreaded File I/O

What This All Means
Overall, the results show that multithreaded file I/O can either improve or degrade performance significantly. Keep in mind that an application typically does not only read data, but also processes the data it reads in a more or less CPU-intensive way. This leads to different results for every application, and even for different tasks within an application. The same may or may not hold for writing data. Furthermore, there are very different ways in which, and times at which, files will be read or written, as well as different hardware and software configurations that an application will meet. There is no general advice software developers can follow. For example, in one application I measured clearly that using multiple threads per sequentially read file increased performance significantly in the 64-bit version. But with the 32-bit version, more threads decreased performance on the same machine, the same operating system (Windows XP x64) and the same source code. In another case, where an application opened and appended to thousands of files, the best solution was to create 8 threads that did nothing but close files (on an average dual-core machine).


CodeAnalyst, Cache Optimization, Data cache misses

Reliable Windows Heap Exploits

Bypassing Browser Memory Protections

Over the past several years, Microsoft has implemented a number of memory protection mechanisms with the goal of preventing the reliable exploitation of common software vulnerabilities on the Windows platform. Protection mechanisms such as GS, SafeSEH, DEP and ASLR complicate the exploitation of many memory corruption vulnerabilities and at first sight present an insurmountable obstacle for exploit developers.

In this paper we will discuss the limitations of all aforementioned protection mechanisms and will describe the cases in which they fail. We aim to show that the protection mechanisms in Windows Vista are particularly ineffective for preventing the exploitation of memory corruption vulnerabilities in browsers. This will be demonstrated with a variety of exploitation techniques that can be used to bypass the protections and achieve reliable remote code execution in many different circumstances.

Designing Applications for High Performance

If you find there is a need to use recursion on locks, then it means you don’t know when the lock is held. The lack of knowledge makes it impossible to minimize the lock hold time because you don’t know when it was held. This is a common problem with Object-Oriented design.

Designing Applications for High Performance

.NET 4 Cancellation Framework

In many prevailing systems, cancellation has been a secondary feature that rarely gets treated in sufficient detail to enable all of the above principles in a comprehensive fashion. The new types introduced to .NET 4 raise cancellation to be a primary concept for .NET APIs and one that can be cleanly and easily incorporated into any system.

Windows Performance Analysis Tools

Eliminate False Sharing? Wrong!

What does a naive programmer think about it? Hmmm... Let's see... I use "fast" non-blocking interlocked operations. Good!... Hmmm... False sharing. Let's see... Hmmm... There is no false sharing here. Good! So my program fully conforms to the recommendations of the experts.

Rubbish! It's a dead-slow, completely non-scalable program.

Who believes in gold?

Recommended reading

The End of the GPU Roadmap

Design for Manycore Systems

Erase-remove idiom

A common programming task is to remove all elements that have a certain value or fulfill a certain criteria from a collection. In C++, this could be achieved using a hand-written loop. It is, however, preferred to use an algorithm from the C++ Standard Library for such tasks.

The algorithms library provides the remove and remove_if algorithms for this. Because these algorithms operate on a range of elements denoted by two forward iterators, they have no knowledge of the underlying container or collection. Thus, the elements are not actually removed from the range, merely moved to the end. When all the removed elements are at the end of the range, remove returns an iterator pointing one past the last unremoved element.
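The two steps described above (remove shuffles the kept elements forward, erase then shrinks the container) combine into the one-liner that gives the idiom its name. The helper name and sample values below are just for illustration:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Erase-remove idiom: remove() moves the kept elements to the front and
// returns an iterator one past the last kept element; erase() then
// actually shrinks the container to that new logical end.
std::vector<int> remove_all(std::vector<int> v, int value) {
    v.erase(std::remove(v.begin(), v.end(), value), v.end());
    return v;
}
```

Calling remove() alone would leave the vector the same size, with unspecified values at the tail - which is exactly why the erase() call is required.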

Hunting for the holy grail - a concurrent collection

I am hunting for a container that can be used without problems in a multithreaded program. It must meet the following requirements:

  • templated

  • non-blocking insertion, complexity at most O(log N), preferably O(1)

  • non-blocking removal, complexity at most O(n), preferably O(1)

  • the ability to iterate over a current "snapshot" of the collection

In other words, something like a garbage collector with better iteration support.

Monitor (synchronization)

In concurrent programming, a monitor is an object intended to be used safely by more than one thread. The defining characteristic of a monitor is that its methods are executed with mutual exclusion. That is, at each point in time, at most one thread may be executing any of its methods. This mutual exclusion greatly simplifies reasoning about the implementation of monitors compared with code that may be executed in parallel.
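The defining property above - at most one thread inside any method at a time - falls out naturally if every public method locks the same internal mutex. A minimal sketch; BankAccount is a hypothetical example class, not something from the article:

```cpp
#include <cassert>
#include <mutex>

// Minimal monitor: every public method runs under the same mutex, so at
// most one thread executes inside the object at any point in time.
class BankAccount {
    mutable std::mutex m_;
    long balance_ = 0;
public:
    void deposit(long amount) {
        std::lock_guard<std::mutex> lock(m_);
        balance_ += amount;
    }
    bool withdraw(long amount) {
        std::lock_guard<std::mutex> lock(m_);
        if (balance_ < amount) return false;  // check and update are atomic
        balance_ -= amount;
        return true;
    }
    long balance() const {
        std::lock_guard<std::mutex> lock(m_);
        return balance_;
    }
};
```

The simplification the article mentions is visible in withdraw(): the balance check and the update cannot be interleaved with another thread's deposit or withdrawal.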


Crysis on an integrated graphics card?
This is probably how it will look in a while: on a TV or a cheap little computer (but with a fast network connection) you play games from any platform, with some lag.

Bandwidth * Latency = Concurrency

C++ is a superpower

  • C++ Garbage Collection

  • Memory Model

  • Atomics

C++0x working draft

1300 pages. Reading it is going to take a while :)

Scale up or scale out?

Listening to the questions from the audience and the conversations afterwards, one thing was clear - this is a complex challenge that even smart, experienced people struggle to do well.  When moving toward developing software in a parallel environment, there are a lot of things to consider, and a lot of questions come up.

How do I train my developers? 
Can I reuse what I have or do I have to rewrite?
What tools should I use?

Actor Model Concurrency

When it comes to programming models, everyone’s favorite whipping boy is the model where access to shared memory is controlled using locks or their close relatives (e.g. semaphores, condition variables). To be sure, this approach is fraught with peril – race conditions, deadlock, livelock, thundering herds, indefinite postponement, lock or priority inversion, and the list just keeps going. The funny thing is that most of these don’t really go away with any of the programming models that are proposed as solutions (including the actor model). For example, software transactional memory is what all the Cool Kids talk about the most. It’s a good model, with many advantages over lock-based programming, but a program can still deadlock at a higher level even if it’s using STM to avoid deadlock at a lower level. There’s an old saying that a bad programmer can write Fortran in any language, and it’s equally true that a bad programmer can create deadlock in any programming model.

Intel 64 Architecture Memory Ordering

Intel 64 memory ordering guarantees that for each of the following memory-access instructions, the constituent memory operation appears to execute as a single memory access regardless of memory type:

  1. Instructions that read or write a single byte.

  2. Instructions that read or write a word (2 bytes) whose address is aligned on a 2 byte boundary.

  3. Instructions that read or write a doubleword (4 bytes) whose address is aligned on a 4 byte boundary.

  4. Instructions that read or write a quadword (8 bytes) whose address is aligned on an 8  byte boundary.

All locked instructions (the implicitly locked xchg instruction and other read-modify-write instructions with a lock prefix) are an indivisible and uninterruptible sequence of load(s) followed by store(s), regardless of memory type and alignment.
Other instructions may be implemented with multiple memory accesses. From a memory ordering point of view, there are no guarantees regarding the relative order in which the constituent memory accesses are made. There is also no guarantee that the constituent operations of a store are executed in the same order as the constituent operations of a load.

Intel 64 memory ordering obeys the following principles:

  1. Loads are not reordered with other loads.

  2. Stores are not reordered with other stores.

  3. Stores are not reordered with older loads.

  4. Loads may be reordered with older stores to different locations but not with older
    stores to the same location.

  5. In a multiprocessor system, memory ordering obeys causality (memory ordering
    respects transitive visibility).

  6. In a multiprocessor system, stores to the same location have a total order.

  7. In a multiprocessor system, locked instructions have a total order.

  8. Loads and stores are not reordered with locked instructions.
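Rules 7 and 8 are what make atomic read-modify-write counters lose no updates: each locked increment is indivisible and totally ordered. A small sketch using std::atomic, whose fetch_add compiles to a lock-prefixed instruction on Intel 64 (the function name and counts are illustrative):

```cpp
#include <atomic>
#include <cassert>
#include <thread>
#include <vector>

// Concurrent increments via a locked RMW instruction never lose updates:
// each fetch_add is an indivisible load+store with a total order.
long parallel_count(int threads, int per_thread) {
    std::atomic<long> counter{0};
    std::vector<std::thread> pool;
    for (int t = 0; t < threads; ++t)
        pool.emplace_back([&counter, per_thread] {
            for (int i = 0; i < per_thread; ++i)
                counter.fetch_add(1);  // lock xadd on x86
        });
    for (auto& th : pool) th.join();
    return counter.load();
}
```

With a plain (non-locked) `counter = counter + 1` the constituent load and store could interleave between threads and increments would be lost.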

Thread scheduling - priority boost

Each thread has a dynamic priority. This is the priority the scheduler uses to determine which thread to execute. Initially, a thread's dynamic priority is the same as its base priority. The system can boost and lower the dynamic priority, to ensure that it is responsive and that no threads are starved for processor time. The system does not boost the priority of threads with a base priority level between 16 and 31. Only threads with a base priority between 0 and 15 receive dynamic priority boosts.

Another situation causes the system to dynamically boost a thread's priority level. Imagine a priority 4 thread that is ready to run but cannot because a priority 8 thread is constantly schedulable. In this scenario, the priority 4 thread is being starved of CPU time. When the system detects that a thread has been starved of CPU time for about three to four seconds, it dynamically boosts the starving thread's priority to 15 and allows that thread to run for twice its time quantum. When the double time quantum expires, the thread's priority immediately returns to its base priority.

When the user works with windows of a process, that process is said to be the foreground process and all other processes are background processes. Certainly, a user would prefer the process that he or she is using to behave more responsively than the background processes. To improve the responsiveness of the foreground process, Windows tweaks the scheduling algorithm for threads in the foreground process. For Windows 2000, the system gives foreground process threads a larger time quantum than they would usually receive. This tweak is performed only if the foreground process is of the normal priority class. If it is of any other priority class, no tweaking is performed.

Programming Applications for Microsoft Windows, Jeffrey Richter
Chapter 7, Thread Scheduling, Priorities, and Affinities

Windows Thread Scheduling by priorities

The system treats all threads with the same priority as equal. The system assigns time slices in a round-robin fashion to all threads with the highest priority. If none of these threads are ready to run, the system assigns time slices in a round-robin fashion to all threads with the next highest priority. If a higher-priority thread becomes available to run, the system ceases to execute the lower-priority thread (without allowing it to finish using its time slice), and assigns a full time slice to the higher-priority thread. For more information, see Context Switches.
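The policy above - service the highest non-empty priority level, round-robin within it - can be modelled with a toy scheduler. The bucket structure and thread ids below are illustrative only, not how Windows implements its dispatcher:

```cpp
#include <cassert>
#include <deque>
#include <map>

// Toy model of priority scheduling: pick a thread from the highest
// non-empty priority bucket and rotate that bucket (round-robin).
class ToyScheduler {
    // Higher key = higher priority; iterate from rbegin() to scan
    // priorities in descending order.
    std::map<int, std::deque<int>> ready_;
public:
    void add(int priority, int thread_id) {
        ready_[priority].push_back(thread_id);
    }
    // Returns the id of the next thread to run, or -1 if none are ready.
    int pick_next() {
        for (auto it = ready_.rbegin(); it != ready_.rend(); ++it) {
            auto& bucket = it->second;
            if (bucket.empty()) continue;
            int id = bucket.front();
            bucket.pop_front();
            bucket.push_back(id);  // round-robin within the priority level
            return id;
        }
        return -1;
    }
};
```

Note that a lower-priority thread is never picked while any higher-priority thread is ready - which is exactly why the starvation boost described in the previous section exists.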

The ReaderWriterGate Lock

What I'd like to do now is explain the idea behind the ReaderWriterGate, discuss a possible implementation, and offer a slightly new way of thinking about threading and thread synchronization in general. It is my hope that you will see places in your existing code where this kind of thinking can be applied so that with minimal re-architecting, you could incorporate some of these ideas to give your applications better performance and scalability.

Performance Profiling Without the Overhead

Reader Writer Lock

I am hunting for a reader-writer lock that meets the following requirements:

  1. C++/Win32/32bit

  2. Reader reentrancy.
    If a thread holds a reader lock in a method and calls another method on the same thread, that method may take the reader lock again (in other words, a reader may enter the read section multiple times).

  3. Writer reentrancy.
    If a thread holds a writer lock in a method and calls another method on the same thread, that method may take the writer lock again (in other words, a writer may enter the write section multiple times).

  4. Hierarchy WR.
    If a thread holds a writer lock, it may take a reader lock.

  5. Hierarchy RW.
    If a thread holds a reader lock, it may take a writer lock (probably a pipe dream; I don't know whether it is achievable without deadlocks).

  6. Weakening of the writer lock.
    If a thread holds a writer lock, it may weaken it and downgrade it to a reader lock (not required, but welcome).

  7. Preferences.
    It should favour neither writers nor readers (and if it must, let it favour writers).

  8. TryEnter
    Try-enter style methods are welcome for both readers and writers.

  9. Timeouts
    Wait timeouts are welcome for both readers and writers.

  10. Fairness
    Fairness is not required, though I wouldn't cry if it were there.

  11. Starvation
    Starvation: not welcome, though acceptable.

A bit of literature:

Added 2010-06-24:

Don't let a long-running operation take hostages

Low-Lock Techniques - Vance Morrison

The biggest conceptual difference between sequential and multithreaded programs is the way the programmer should view memory. In a sequential program, memory can be thought of as being stable unless the program is actively modifying it. For a multithreaded program, however, it is better to think of all memory as spinning (being changed by other threads) unless the programmer does something explicit to freeze (or stabilize) it.

Sequential consistency is an intuitive model and, as the previous example shows, some of the concepts of sequential programs can be applied to it. It is also the model that gets implemented naturally on single-processor machines so, frankly, it is the only memory model most programmers have practical experience with. Unfortunately, for a true multiprocessor machine, this model is too restrictive to be implemented efficiently by memory hardware, and no commercial multiprocessor machines conform to it.

I'm halfway through reading it; it's taking terribly long, but it's really worth it.

What Every Dev Must Know About Multithreaded Apps

A really exceptionally well-written article. Even if you don't have time to read it today, be sure to bookmark it for a rainy day.

A simple condition variable primitive

Almost user mode - unfair mutex - Optex

Build a Richer Thread Synchronization Lock

Reader/Writer Locks and the ResourceLock Library

System.Collections.Concurrent Namespace - .NET 4.0

Very interesting classes, especially the iterators. An iterator that allows iteration in a thread-safe way sounds interesting.

And a blog by one of the people who, I suspect, works on the implementation of these iterators.

Enumerating Concurrent Collections
This list in particular is nice:

  • Deleted items will always be seen

  • Deleted items will never be seen

  • Added items will always be seen if added at the end of the collection

  • Added items will always be seen if added wherever they are added

  • Added items will never be seen

  • Moved items will never be seen twice

  • Moved items will be seen twice, if moved to the end of the collection

  • Moved items will always be seen, even if moved to the beginning of the collection

  • No more than N items will be seen, where N is the original length of the collection



ABA problem


Single Reader, Single Writer Fixed Size Lookaside Dequeue

Rules for use:

  • Push() can only be called from one thread, but it doesn't need to be the same thread as Pop().

  • Pop() can only be called from one thread, but it doesn't need to be the same thread as Push().

  • max_count must be a power of two

  • Don't Pop() more than Count()

  • Check IsFull() or Count() before a Push()
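Under those rules, a sketch of such a queue might look like the following. The power-of-two requirement on max_count is what lets the ever-growing head/tail indices be masked cheaply instead of taken modulo; the class name and int payload are assumptions, and the memory ordering is only a sketch of the single-producer/single-consumer publication pattern:

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of a fixed-size single-producer/single-consumer queue following
// the rules above: one pushing thread, one popping thread, caller checks
// IsFull()/Count() before Push() and never pops more than Count().
class SpscQueue {
    std::vector<int> buf_;
    const std::size_t mask_;               // max_count - 1, for cheap wrap
    std::atomic<std::size_t> head_{0};     // written only by the consumer
    std::atomic<std::size_t> tail_{0};     // written only by the producer
public:
    explicit SpscQueue(std::size_t max_count)  // max_count: power of two
        : buf_(max_count), mask_(max_count - 1) {}
    std::size_t count() const { return tail_.load() - head_.load(); }
    bool is_full() const { return count() == buf_.size(); }
    // Producer side only; caller must check is_full() first.
    void push(int v) {
        std::size_t t = tail_.load(std::memory_order_relaxed);
        buf_[t & mask_] = v;                          // fill the slot...
        tail_.store(t + 1, std::memory_order_release); // ...then publish
    }
    // Consumer side only; caller must not pop more than count().
    int pop() {
        std::size_t h = head_.load(std::memory_order_relaxed);
        int v = buf_[h & mask_];
        head_.store(h + 1, std::memory_order_release); // free the slot
        return v;
    }
};
```

Because each index is written by exactly one thread, no compare-and-swap is needed - the release store on tail publishes the slot to the consumer, and the release store on head returns it to the producer.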

Attempts to cure the US derivatives market

An interesting post on Piotr Kuczyński's blog. Recommended.

C++0x - Uniform syntax for functions

Rvalue References: C++0x Features in VC10

Highly recommended. I haven't read anything this eagerly in a long time ;)
I'm only halfway through so far, but it's worth setting aside the time.

Economic opinions - The domestic financial market in June

References to r-values

Interest rates down... and what next?

Paradigm shift: Design Considerations For Parallel Programming

Concurrent programming is notoriously difficult, even for experts. When logically independent requests share various resources (dictionaries, buffer pools, database connections, and so forth), the programmer must orchestrate the sharing, introducing new problems. These problems—data races, deadlocks, livelocks, and so forth—generally derive from a variety of uncertainties that arise when concurrent tasks attempt to manipulate the same data objects in a program. These problems make the basic software development tasks of testing and debugging extremely difficult [...]

The combination of extra concepts, new failure modes, and testing difficulty should give every developer pause. Is this something you really want to bite off? Clearly, the answer is no! However, many will be forced into this swamp in order to deliver the necessary performance. Microsoft is actively developing solutions to some of the core problems, but high-productivity solution stacks are not yet available.

Current multicore chip architectures are able to increase the number of cores faster than memory bandwidth, so for most problems where the data set does not fit in memory, using the memory hierarchy is an important concern. This imbalance gives rise to a style of programming called stream processing where the focus is to stage blocks of data into the on-chip cache (or perhaps private memory) and then perform as many operations against that data as possible before displacing it with the next block. Those operations may be internally parallel to use the multiple cores or they may be pipelined in a data flow style, but the key issue is to do as much work on the data in the cache while it is there.

Concurrency and exceptions

Budget in times of crisis

ATI Stream Software Development Kit

C++, the volatile / memory barrier

The C and C++ standards do not address multiple threads (or multiple processors), and as such, the usefulness of volatile depends on the compiler and hardware. Although volatile guarantees that the reads and writes will happen in the exact order specified in the source code, the compiler may generate code which reorders a volatile read or write with non-volatile reads or writes, thus limiting its usefulness as an inter-thread flag or mutex. Moreover, you are not guaranteed that volatile reads and writes will be seen in the same order by other processors due to caching, meaning volatile variables may not even work as inter-thread flags or mutexes.

Some languages and compilers may provide sufficient facilities to implement functions which address both the compiler reordering and machine reordering issues. In Java version 1.5 (also known as version 5), the volatile keyword is now guaranteed to prevent certain hardware and compiler re-orderings, as part of the new Java Memory Model. The proposed C++ memory model does not use volatile; instead, C++0x will include special atomic types and operations with semantics similar to those of volatile in the Java Memory Model.

The code you write is not necessarily executed in the order in which the instructions appear in the source.

Optimizing compilers, such as the Microsoft C compiler, sometimes eliminate or reorder read and write instructions if the optimizations do not break the logic of the routine being compiled. In addition, certain hardware architectures sometimes reorder read and write instructions to improve performance. Furthermore, on multiprocessor architectures, the sequence in which read and write operations are executed can appear different from the perspective of different processors.

Most of the time, reordering by the compiler or the hardware is completely invisible and has no effect on results other than generating them more efficiently. However, in a few situations, you must prevent or control reordering. The volatile keyword in C and the Windows synchronization mechanisms can ensure program order of execution in nearly all situations. In some rare instances, the executable code must contain memory barriers to prevent hardware reordering.

Complete information about compiler and hardware reordering and the use of memory barriers is now available in Multiprocessor Considerations for Kernel-Mode Drivers. This information expands on the information previously available in the paper "Memory Barriers in Kernel-Mode Drivers."

If you look at the sample drivers shipped with the Windows DDK, you will see that volatile appears infrequently. In general, volatile is of limited use in driver code for the following reasons:
•    Using volatile prevents optimization only of the volatile variables themselves. It does not prevent optimizations of nonvolatile variables relative to volatile variables. For example, a write to a nonvolatile variable that precedes a read from a volatile variable in the source code might be moved to execute after the read.
•    Using volatile does not prevent the reordering of instructions by the processor hardware.
•    Using volatile correctly is not enough on a multiprocessor system to guarantee that all CPUs see memory accesses in the same order. 
Windows synchronization mechanisms are more useful in preventing all these potential problems.
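The portable alternative all of the above points toward is an atomic flag with acquire/release semantics rather than volatile. A small sketch of the publication pattern (the variable names and the value 42 are illustrative):

```cpp
#include <atomic>
#include <cassert>
#include <thread>

// volatile gives no cross-thread ordering guarantees in C/C++;
// std::atomic (C++0x/C++11) does. The release store on `ready`
// guarantees the plain write to `payload` is visible to any thread
// that observes ready == true with an acquire load.
int payload = 0;
std::atomic<bool> ready{false};

void producer() {
    payload = 42;                                  // plain write...
    ready.store(true, std::memory_order_release);  // ...published here
}

int consumer() {
    while (!ready.load(std::memory_order_acquire)) {}  // spin until set
    return payload;  // guaranteed to observe 42
}
```

Had `ready` been merely volatile, both the compiler and the processor would remain free to reorder the payload write past the flag write, which is exactly the failure mode described above.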

Anti-convoy locks in Windows Server 2003 SP1 and Windows Vista

debugging custom filters for unhandled exceptions

Rebase DLL

A scalable reader/writer scheme with optimistic retry

WinAPI Kernel Prefixes

Race-free Multithreading

Debug Diagnostic Tool v1.1

STL Breaking Changes in Visual Studio 2010 Beta 1

Intel® Parallel Amplifier

Intel® Parallel Inspector

ASSERT to a file

    // Redirect CRT assertion output to a log file instead of a dialog box.
    HANDLE hLogFile = CreateFile("c:\\log.txt", GENERIC_WRITE,
        FILE_SHARE_WRITE, NULL, CREATE_ALWAYS,
        FILE_ATTRIBUTE_NORMAL, NULL);
    _CrtSetReportMode(_CRT_ASSERT, _CRTDBG_MODE_FILE);
    _CrtSetReportFile(_CRT_ASSERT, hLogFile);


CPU cache

False Sharing - Cache misses and cache line contention

A really interesting article. It opens your eyes to quite interesting causes of performance problems on multiple processors.

The general case to watch out for is when you have two objects or fields that are frequently accessed (either read or written) by different threads, at least one of the threads is doing writes, and the objects are so close in memory that they're on the same cache line because they are:

  • objects nearby in the same array, as in Example 1 above;

  • fields nearby in the same object, as in Example 4 of [3] where the head and tail pointers into the message queue had to be kept apart;

  • objects allocated close together in time (C++, Java) or by the same thread (C#, Java), as in Example 4 of [3] where the underlying list nodes had to be kept apart to eliminate contention when threads used adjacent or head/tail nodes;

  • static or global objects that the linker decided to lay out close together in memory;

  • objects that become close in memory dynamically, as when during compacting garbage collection two objects can become adjacent in memory because intervening objects became garbage and were collected; or

  • objects that for some other reason accidentally end up close together in memory.
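One common mitigation for all of these cases is to pad or align the hot fields apart so that they land on separate cache lines. A sketch assuming a 64-byte line (the usual x86 cache line size; the struct and field names are made up):

```cpp
#include <cassert>
#include <cstddef>

// Keep two counters written by different threads on separate cache
// lines, so writes by thread A never invalidate thread B's line.
// 64 bytes is an assumption: the typical x86 cache line size.
struct PaddedCounters {
    alignas(64) long counter_a;  // updated only by thread A
    alignas(64) long counter_b;  // updated only by thread B
};
```

The alignas forces each field to start on its own 64-byte boundary, trading a little memory for the elimination of the line ping-pong described above.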

Why are there no motorways in Poland?

Break Free of Code Deadlocks in Critical Sections Under Windows


Software Optimization Guide for AMD Family 10h Processors

C and C++ Source-Level Optimizations

  • Declarations of Floating-Point Values
  • Using Arrays and Pointers
  • Unrolling Small Loops
  • Arrange Boolean Operands for Quick Expression Evaluation
  • Expression Order in Compound Branch Conditions
  • Long Logical Expressions in If Statements
  • Dynamic Memory Allocation Consideration
  • Unnecessary Store-to-Load Dependencies
  • Matching Store and Load Size
  • Use of Function Prototypes
  • Use of const Type Qualifier
  • Generic Loop Hoisting
  • Local Static Functions
  • Explicit Parallelism in Code
  • Extracting Common Subexpressions
  • Sorting and Padding C and C++ Structures
  • Replacing Integer Division with Multiplication
  • Frequently Dereferenced Pointer Arguments
  • 32-Bit Integral Data Types
  • Sign of Integer Operands
  • Accelerating Floating-Point Division and Square Root
  • Speeding Up Branches Based on Comparisons Between Floats
  • Improving Performance in Linux® Libraries
  • Aligning Matrices

Visual Studio 2010 Beta

What about inflation?

LPMud driver - MCCP compression code

I dug this up; maybe it will come in handy for someone. Compression routines for the LPMud driver - an implementation of the MCCP protocol, as changes to the driver.

Tools - .NET Memory Profiler

A nice tool for profiling memory usage. Recommended.

A few screenshots

A cool-headed look at the budget

IMF Global Financial Stability Report - April 2009 - once again

The International Monetary Fund (IMF) has admitted to an embarrassing mistake, correcting heavily overstated figures on the indebtedness of the crisis-stricken countries of Central and Eastern Europe, reports Thursday's "Financial Times".

A mistake :)

The end of gold on Liffe

CodeAnalyst 2.93.705 Beta

API hooking revealed

To review:



C++ initializes class members in the order they are declared

What does the American budget look like?

The test will end - will the stress remain?

Lock acquisition ordered by memory address?

As is well known, the most common deadlocks arise in the following situation:

  • we have (at least) two synchronization objects S1, S2

  • thread T1 acquires the objects in the order S1, S2

  • thread T2 acquires the objects in the order S2, S1

  • both threads run at the same time

Some time ago I posted a link to an article that recommended a (global) hierarchy of all critical sections in order to avoid deadlocks.

Today I found quite an interesting way of establishing such a global hierarchy.

You can acquire the sections in order of increasing physical memory addresses. :) Nice, isn't it?
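That address-ordering trick can be sketched in a few lines; lock_in_address_order and unlock_both are hypothetical helper names:

```cpp
#include <mutex>

// Always acquire the two locks in order of increasing memory address,
// so every thread uses the same global order and the S1/S2 vs S2/S1
// deadlock above cannot occur.
void lock_in_address_order(std::mutex& a, std::mutex& b) {
    if (&a < &b) { a.lock(); b.lock(); }
    else         { b.lock(); a.lock(); }
}

void unlock_both(std::mutex& a, std::mutex& b) {
    a.unlock();
    b.unlock();
}
```

Whichever argument order callers pass, the acquisition order is the same - which is the whole point. (C++11's std::lock takes a different approach to the same problem, using a deadlock-avoidance algorithm instead of a fixed order.)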


And a bit more on the subject of hierarchies:

The most practical rule to avoid deadlock is to make sure that the locks are always acquired in the same order. In our example, it means that either the score or character lock must be acquired first—it doesn't matter which as long as we are consistent. This implies the need for a lock hierarchy—meaning that locks are not only protecting their individual items but are also keeping an order to the items. The score lock protects not only the values of the score, but the character lock as well.


And a bit more on lock levels:

While lock leveling works, it does not come without challenges. Dynamic composition of software components can lead to unexpected runtime failures. If a low-level component holds a lock and makes a virtual method call against a user-supplied object, and that user object then attempts to acquire a lock at a higher level, a lock hierarchy-violation exception will be generated. A deadlock won't occur, but a run-time failure will. This is one reason that making virtual method calls while holding a lock is generally considered a bad practice. Although this is still better than running the risk of a deadlock, this is a primary reason databases do not employ this technique: they must enable entirely dynamic composition of user transactions.


And a bit more:

The easiest way to deal with this is to always lock the mutexes in the same order. This is especially easy if the order can be hard-coded, and some uses naturally lend themselves towards this choice. For example, if the mutexes protect objects with different roles, it is relatively easy to always lock the mutex protecting one set of data before locking the other one. In such a situation, Lock hierarchies can be used to enforce the ordering — with a lock hierarchy, a thread cannot acquire a lock on a mutex with a higher hierarchy level than any mutexes currently locked by that thread.
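A minimal lock-leveling checker in the spirit of the quotes above (my own construction, not code from any of the quoted articles): each mutex carries a level, and a thread may only acquire a mutex whose level is strictly lower than the lowest level it already holds.

```cpp
#include <mutex>

// "No lock held" sentinel: any real level compares lower than this.
thread_local int g_lowest_held = 1 << 30;

class LeveledMutex {
    std::mutex m_;
    int level_;
    int saved_;   // level state to restore on release (assumes LIFO release)
public:
    explicit LeveledMutex(int level) : level_(level), saved_(0) {}

    // Returns false instead of locking when the acquisition would violate
    // the hierarchy (a real implementation might assert or throw instead).
    bool try_acquire() {
        if (level_ >= g_lowest_held) return false;
        m_.lock();
        saved_ = g_lowest_held;
        g_lowest_held = level_;
        return true;
    }

    void release() {
        g_lowest_held = saved_;
        m_.unlock();
    }
};
```

This is exactly the runtime failure the quote describes: a violation is reported immediately and deterministically, instead of deadlocking only under unlucky timing.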

MSVC, STL, locale and thread-unfriendly object

Currently reading: More Exceptional C++ by Herb Sutter

try/catch in constructor

Morals About Safe Coding

Moral #4: Always perform unmanaged resource acquisition in the constructor body, never in initializer lists. In other words, either use "resource acquisition is initialization" (thereby avoiding unmanaged resources entirely) or else perform the resource acquisition in the constructor body.

Moral #5: Always clean up unmanaged resource acquisition in local try block handlers within the constructor or destructor body, never in constructor or destructor function try block handlers.

Moral #8: Prefer using "resource acquisition is initialization" to manage resources. Really, really, really. It will save you more headaches than you can probably imagine, including hard-to-see ones, similar to some we've already dissected.

More Exceptional C++, Herb Sutter
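Moral #4 in code (my own illustration, not Sutter's example): wrap the unmanaged resource in a RAII holder, so that a constructor body that throws later cannot leak it.

```cpp
#include <cstdlib>

// RAII holder for a raw allocation: acquisition is initialization.
struct Buffer {
    void* p;
    explicit Buffer(std::size_t n) : p(std::malloc(n)) {}
    ~Buffer() { std::free(p); }
    Buffer(const Buffer&) = delete;
    Buffer& operator=(const Buffer&) = delete;
};

struct Widget {
    Buffer buf;            // fully constructed before the body runs
    Widget() : buf(64) {
        throw 42;          // buf is a fully constructed member, so its
    }                      // destructor still runs and frees the memory
};

bool leaked_on_throw() {
    try { Widget w; (void)w; } catch (int) { return false; }  // no leak
    return true;
}
```

The language guarantees that fully constructed members are destroyed when a constructor throws; a raw pointer acquired in the initializer list would have enjoyed no such guarantee.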


IMF Global Financial Stability Report - April 2009

An awful lot of reading. Just skimming the charts took me over an hour. :)

typename keyword versus class keyword

Can you explain the purpose of the typename keyword in C++? When should I use it instead of <class T>? Is there some difference between the two?
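The short answer, as a sketch: `class` and `typename` are interchangeable only in a template parameter list; for dependent names inside a template, `typename` is required to tell the compiler the name denotes a type.

```cpp
#include <vector>

// `template <class C>` would mean exactly the same here.
template <typename C>
std::size_t first_size(const C& c) {
    // C::value_type is a dependent name; without `typename` the compiler
    // must assume it is not a type and rejects the declaration.
    typename C::value_type v{};
    (void)v;
    return c.size();
}
```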

C++ FAQ Lite - Exceptions and error handling

Exception safe code in real world

Below you can find methods of writing exception-safe code in the real world, in the order preferred by the author:

- Petru's ScopeGuard
- try / catch mess
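A minimal sketch in the spirit of the ScopeGuard idiom (my own reduction, not the original Alexandrescu/Marginean code): a rollback action runs at scope exit unless explicitly dismissed.

```cpp
#include <functional>

class ScopeGuard {
    std::function<void()> rollback_;
    bool active_ = true;
public:
    explicit ScopeGuard(std::function<void()> f) : rollback_(std::move(f)) {}
    ~ScopeGuard() { if (active_) rollback_(); }   // runs on any exit path
    void dismiss() { active_ = false; }           // call after success
    ScopeGuard(const ScopeGuard&) = delete;
    ScopeGuard& operator=(const ScopeGuard&) = delete;
};

int run(bool commit) {
    int value = 0;
    {
        value = 1;                               // tentative change
        ScopeGuard undo([&] { value = 0; });     // armed rollback
        if (commit) undo.dismiss();              // keep the change
    }                                            // guard fires here if armed
    return value;
}
```

Because the destructor fires on returns and exceptions alike, the rollback needs no try/catch scaffolding, which is precisely why the author prefers it to the "try / catch mess".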

Create a Language Compiler for the .NET Framework

What does the Americans' return to saving mean?


Baltic Dry Index

On the world’s largest VLCC trading route, from the Middle East to Japan, rates have fallen to under $8,000 per day, just over 10% of the $70,000 per day seen at the beginning of 2009.   “There’s no reason why the market should be where it is,” said Frontline acting chief executive Jens Martin Jensen.   “It doesn’t make any sense. If [a crude oil trader] pays $50 a barrel and you have 2m barrels on board [a tanker] worth $100m, who cares if you are paying $10,000 or $20,000 per day? It doesn’t make any difference. It’s weakness in certain owners’ minds.”

Prospects for the US dollar

Bambrick's 8th Rule of Code Reuse

It's far easier and much less trouble to find and use a bug-ridden, poorly implemented snippet of code written by a 13 year old blogger on the other side of the world, than it is to find and use the equivalent piece of code, written by your team leader on the other side of a cubicle partition.

Use Thread Pools Correctly: Keep Tasks Short and Nonblocking

How to use your PC and Webcam as a motion-detecting and recording security camera

Lock Convoys

When the arrival rate at a lock is consistently high compared to its lock acquisition rate, a lock convoy may result. In the extreme, there are more threads waiting at a lock than can be serviced, leading to a catastrophe. This is more common on server-side programs where certain locks protecting data structures needed by most clients can get unusually hot.

Critical sections as implemented in Microsoft Windows operating systems provide a good example of how lock convoys can occur. In Windows, critical sections use a combination of a spinlock and a kernel synchronization object called an "event" to ensure mutual exclusion. For low-contention critical sections, the spinlock will provide mutual exclusion most of the time, falling back on the event only when a thread fails to acquire the spinlock within a certain amount of time. When contention is high, however, it is possible for many threads to fail to acquire the spinlock and enter a waiting state, all waiting on the same event.

.NET Matters: False Sharing

As one example, the code in Figure 4 shows a version of this code that doesn't suffer from the same problem. Instead, it allocates a bunch of extra Random instances between those that we care about, thus ensuring that the ones we do care about are far enough apart so as to not be subject to false sharing (at least on this machine). For our tests, this updated version produced significantly better results, running up to six times faster than the code from Figure 3 on our dual-core test machine.
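The article's fix is spatial padding in C#; a C++ analogue of the same idea (my own sketch, with an assumed 64-byte line size) gives each thread's hot counter its own cache line so that writes by one core do not invalidate the line holding another core's counter:

```cpp
#include <cstddef>

constexpr std::size_t kCacheLine = 64;   // typical x86 line size (assumption)

// alignas pads and aligns the struct so that adjacent array elements land
// on distinct cache lines, eliminating false sharing between them.
struct alignas(kCacheLine) PaddedCounter {
    long value = 0;
};

PaddedCounter counters[4];   // e.g. one counter per worker thread
```

C++17 exposes the relevant constant as `std::hardware_destructive_interference_size`, which can replace the hard-coded 64 where available.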


__try/__except - how to catch EXCEPTION_ACCESS_VIOLATION

#include <windows.h>
#include <iostream>
using namespace std;

int filter(unsigned int code, struct _EXCEPTION_POINTERS *ep)
{
    // Handle only access violations; let everything else keep searching.
    return code == EXCEPTION_ACCESS_VIOLATION
        ? EXCEPTION_EXECUTE_HANDLER
        : EXCEPTION_CONTINUE_SEARCH;
}

void test()
{
    __try
    {
        *((char *)(NULL)) = 2;   // deliberate access violation
    }
    __except (filter(GetExceptionCode(), GetExceptionInformation()))
    {
        cout << "wicked :)";
    }
}

Unhandled exceptions - _set_se_translator and /EHa




Structured Exception Handling (C++)


Unhandled C++ Exceptions - set_terminate - Understanding C++ Exception Handling

Exceptions and return values

Unit Testing Guidelines

string forty = interpret_cast<string>(40);

Valued Conversions

I drive a car the way most people use a computer

What does a trillion dollars look like?

Data on the financial condition of enterprises (Q4)

$1,000 billion

That is how much the Fed intends to print. For comparison, US GDP in 2008 was $14,000 billion. :)

What's In The Box?

Use the Boost, Luke

"While the standard auto_ptr provides a safer alternative to raw pointers, it has its limitations and some surprising behavior. The Guru helps out by giving the narrator a boost - library, that is. The Boost library has five smart pointers that provide a rich array of useful behavior."

The global recession is becoming a fact

C++ without cholesterol

A very interesting course on C++. I mainly read the advanced topics, so I don't know what the early chapters are like. The technical information about the actual implementation of C++ is especially interesting. I started with the article on exceptions, but the other articles drew me in as well.

ASSERT without a MessageBox

I always forget. It should be done like this:


Real-estate loan counter,2.html?f=17007&w=91792183

Aggregate Sovereign Credit Risk

UBS in defense of the CEE region and Poland

A bit about Poland's situation compared with the economies of the region (and the "emerging" countries).

Inside the Meltdown

The Crash Course

Especially recommended

Soros and the PLN

A perfect storm over Central Europe

An Analysis of the Skype Peer-to-Peer Internet Telephony Protocol

Forecasts for 2009 - Saxo Bank


"In 2008 the Swiss franc behaved like Dr Jekyll and Mr Hyde as the credit crisis spilled into one market after another. At first the franc strengthened markedly as risk appetite contracted sharply, as it almost always had in the past. It was assumed that the CHF was widely used as a funding currency in carry trades, and that anything tied to that market could reverse. However, after the franc hit record highs against the euro, it began to weaken quickly against that currency, and by the end of 2008 traders and macro investors were wondering how exposed the currency was to risk coming from the financial sector. Although the franc had grown popular, especially for mortgages in Central and Eastern Europe, as a currency for funding foreign assets, Swiss banks and investors had also invested abroad, in amounts exceeding several times the country's GDP, so foreign liabilities pose an enormous threat. Further cracks appeared when the American and German authorities began seriously raising the issue of tax havens; this may strain Switzerland's ability to keep attracting capital. At the start of 2009 we hold a neutral stance on the CHF. In the New Year, however, the franc may fare better than the euro."

A basket of short positions in Central and Eastern European (CEE) currencies vs. EUR

The three largest Central European economies were darlings among emerging markets during the growth of the global credit bubble. These economies were fueled by a huge inflow of capital: a boom in fixed-asset investment and a credit expansion that significantly increased lending at every level. All three countries run huge budget deficits because of weak funding conditions. In the private sector, many mortgages were taken out in foreign currencies such as the Swiss franc, since that seemed the best solution: most major currencies were falling against the local ones, and interest rates were lower abroad. Now that the effects of the property bubble are spreading at an alarming pace, these countries will be forced to rebalance. Foreign debt is growing dangerously, as capital flows rapidly devalue these currencies. The source of credit has also dried up, both for holders of foreign-currency mortgages, who face a growing risk of default, and for the states themselves, which will have to react quickly. In 2009 all three Central European currencies may fall sharply against the EUR.

We have no doubt: 2009 will bring rate cuts, in some countries substantial ones. Several countries, such as the United Kingdom, talk openly about adopting a Zero Interest Rate Policy, as Japan did after its bubble burst in 1990. The ECB and the Swiss National Bank will also be forced into deeper cuts, as Eastern Europe suffers a collapse from its inability to refinance the loans it has taken on.
At the moment, however, the most important question is: will we have deflation or inflation? To answer it, one must understand our monetary system. In the past, under the gold standard and its limits on money-supply growth, severe deflation could be expected whenever capital was destroyed and the real cost of debt rose sharply. Today there are no limits whatsoever on money-supply growth. In the United States, the M1 money supply has already surged by a staggering 10% after staying nearly flat for 4 years. Central banks around the world will do the same: print and spend money. We are convinced that 2009 will be a year of deflation, but the rapidly growing money supply will not let prices fall for long and will lead to inflation before 2010.

That, however, is the scenario for the more developed countries, with secure property rights and safe-haven status. Other emerging-market countries, or those with unsustainable current-account deficits, will not be able to cut interest rates in this situation. In fact, emerging-market countries may be forced to raise rates to attract capital, or simply to prevent large outflows of money. In other words, risk premiums, already at record levels, will widen even further from where they stand today. Spreads between corporate and government fixed income, between emerging markets and the G10, between long- and short-dated fixed income, and between AAA and high-risk paper will keep growing.

Recommended... More interesting analyses here:

Fed Reserve Fails to Reflate the US Banking System

CHESS: An Automated Concurrency Testing Tool

Inside Windows 7 - Service Controller and Background Processing

Mark Russinovich: Inside Windows 7

Bad exception-based code and not-bad exception-based code

Smart Pointers in C++

Using auto_ptr Effectively

A recipe for disaster

Diamond problem
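The classic diamond in a few lines: B and C both derive from A, and D derives from both. With plain inheritance D would contain two A subobjects; `virtual` inheritance merges them into one.

```cpp
// Without `virtual`, d.x would be ambiguous and d.B::x / d.C::x would be
// two different ints. With virtual inheritance there is a single shared A.
struct A { int x = 0; };
struct B : virtual A {};
struct C : virtual A {};
struct D : B, C {};

int shared_base_value() {
    D d;
    d.B::x = 7;      // writes the one shared A subobject
    return d.C::x;   // reads the same subobject through the other path
}
```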

What is really happening to the złoty?

I'd put my name under a few of these ideas


It will be leaner and more efficient,76842,6196748,Bedzie_skromniej_i_wydajniej.html

10 years behind the Asians

Soto del Henares

Startup, Shutdown and related matters

China is braking hard

Velocity of money

Quite an interesting discussion. Recommended if you have time to kill:,2.html?f=17007&w=89868671&a=89868671

The Next Step in the Spam Control War: Greylisting by Evan Harris

Lately, mail sent from my server has been taking a beating from the mechanism below:

From Wikipedia:

Greylisting (or graylisting) is a method of protecting e-mail accounts against spam. A mail server that uses greylisting rejects mail from unrecognized senders. If the mail was sent from a regular mail server, that server retries delivery after a few hours, and the recipient's server then accepts it. Mail originating from a spam-sending server is usually never sent again.

And here is the whitepaper by the author of the idea.
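Greylisting in miniature (my own sketch, not code from the whitepaper): temporarily reject the first delivery attempt from an unknown (ip, sender, recipient) triplet and accept a retry that arrives after a delay. Legitimate MTAs retry; most spamware does not.

```cpp
#include <map>
#include <string>
#include <tuple>

using Triplet = std::tuple<std::string, std::string, std::string>;

class Greylist {
    std::map<Triplet, long> first_seen_;   // triplet -> time of first attempt
    long delay_;                           // required wait before acceptance
public:
    explicit Greylist(long delay_seconds) : delay_(delay_seconds) {}

    // Returns true if the message should be accepted at time `now`
    // (seconds); otherwise the server would answer "451, try again later".
    bool accept(const Triplet& t, long now) {
        auto it = first_seen_.find(t);
        if (it == first_seen_.end()) {
            first_seen_[t] = now;          // unknown triplet: defer
            return false;
        }
        return now - it->second >= delay_; // accept a sufficiently late retry
    }
};
```

A production implementation would also expire old entries and whitelist triplets that have passed once, but the core decision is just this lookup.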

Exception handling - Resource Acquisition Is Initialization

Resource Acquisition Is Initialization, RAII for short, is a popular design pattern in C++ and D. The technique ties the acquisition and release of a resource to the initialization and deinitialization of a variable.

Acquiring the resource is bound to construction, and releasing it to the automatic destruction of the variable. Since the destructor is invoked automatically when a variable goes out of scope, the resource is guaranteed to be released as soon as the variable's lifetime ends. This holds even when an exception is thrown. RAII is a key concept for writing exception-safe code.

The RAII technique is used, for example, when taking thread locks or when handling files.

Ownership of dynamically allocated memory (allocated with new) can also be controlled with RAII. For this purpose the C++ standard library defines auto_ptr. The lifetime of shared objects can be managed by a smart pointer with shared-ownership semantics, such as boost::shared_ptr defined by the Boost library and slated for inclusion in the new C++0x standard, or Loki::SmartPtr from the Loki library.
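The thread-lock case mentioned above, as a sketch (the standard library's std::lock_guard does exactly this):

```cpp
#include <mutex>

// Acquisition is construction, release is destruction: the lock is freed
// on every exit path from the owning scope, including thrown exceptions.
class ScopedLock {
    std::mutex& m_;
public:
    explicit ScopedLock(std::mutex& m) : m_(m) { m_.lock(); }
    ~ScopedLock() { m_.unlock(); }
    ScopedLock(const ScopedLock&) = delete;
    ScopedLock& operator=(const ScopedLock&) = delete;
};

int guarded_increment(std::mutex& m, int& counter) {
    ScopedLock lock(m);   // no explicit unlock needed anywhere below
    return ++counter;
}
```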

The approach of gaining exception safety through ordering and the "resource acquisition is initialization" technique (§14.4) tends to be more elegant and more efficient than explicitly handling errors using try-blocks. More problems with exception safety arise from a programmer ordering code in unfortunate ways than from lack of specific exception-handling code. The basic rule of ordering is not to destroy information before its replacement has been constructed and can be assigned without the possibility of an exception.


From (E.6)

How can I define my types so that they don't cause undefined behavior or leak resources?
The basic rules are:
[1] When updating an object, don't destroy its old representation before a new representation is completely constructed and can replace the old one without risk of exceptions. For example, see the implementations of vector::operator=(), safe_assign(), and vector::push_back() in §E.3.
[2] Before throwing an exception, release every resource acquired that is not owned by some (other) object.
[2a] The "resource acquisition is initialization" technique (§14.4) and the language rule that partially constructed objects are destroyed to the extent that they were constructed (§14.4.1) can be most helpful here. For example, see leak() in §E.2.
[2b] The uninitialized_copy() algorithm and its cousins provide automatic release of resources in case of failure to complete construction of a set of objects (§E.4.4).
[3] Before throwing an exception, make sure that every operand is in a valid state. That is, leave each object in a state that allows it to be accessed and destroyed without causing undefined behavior or an exception to be thrown from a destructor. For example, see vector's assignment in §E.3.2.
[3a] Note that constructors are special in that when an exception is thrown from a constructor, no object is left behind to be destroyed later. This implies that we don't have to establish an invariant and that we must be sure to release all resources acquired during a failed construction before throwing an exception.
[3b] Note that destructors are special in that an exception thrown from a destructor almost certainly leads to violation of invariants and/or calls to terminate().

In practice, it can be surprisingly difficult to follow these rules. The primary reason is that exceptions can be thrown from places where people don't expect them. A good example is std::bad_alloc. Every function that directly or indirectly uses new or an allocator to acquire memory can throw bad_alloc. In some programs, we can solve this particular problem by not running out of memory. However, for programs that are meant to run for a long time or to accept arbitrary amounts of input, we must expect to handle various failures to acquire resources. Thus, we must assume every function capable of throwing an exception until we have proved otherwise.
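Rule [1] above in code (my own sketch, not the book's vector implementation): build the replacement first, then swap, so the old representation is destroyed only after the new one exists and a failed allocation leaves the object untouched.

```cpp
#include <cstring>

class Text {
    char* data_;
public:
    explicit Text(const char* s) : data_(new char[std::strlen(s) + 1]) {
        std::strcpy(data_, s);
    }
    Text(const Text& o) : data_(new char[std::strlen(o.data_) + 1]) {
        std::strcpy(data_, o.data_);
    }
    ~Text() { delete[] data_; }

    Text& operator=(const Text& o) {
        Text tmp(o);            // may throw; *this has not been modified yet
        char* p = data_;        // nothrow swap of representations
        data_ = tmp.data_;
        tmp.data_ = p;
        return *this;           // tmp's destructor frees the old buffer
    }

    const char* c_str() const { return data_; }
};
```

This copy-and-swap shape gives the strong guarantee for free: if the copy throws, the assignment has no effect at all.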

Lock object sharing with hashes
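The title above presumably refers to lock striping: instead of one mutex per object (too many) or one global mutex (too contended), hash the object's key onto a small fixed pool of mutexes. A minimal sketch, with my own naming:

```cpp
#include <functional>
#include <mutex>
#include <string>

class StripedLocks {
    static const std::size_t kStripes = 16;   // pool size (assumption)
    std::mutex stripes_[kStripes];
public:
    // Every key deterministically maps to one of the pooled mutexes, so
    // two threads touching the same key always contend on the same lock.
    std::mutex& for_key(const std::string& key) {
        return stripes_[std::hash<std::string>{}(key) % kStripes];
    }
};
```

The trade-off is occasional false contention: unrelated keys that hash to the same stripe share a lock, which is usually acceptable for a bounded pool.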

Terminator: Salvation

Hibernate once, resume many

The złoty in 2009 (Janusz Jankowiak's blog),dlaczego;zloty;jest;niepopularny,18412.html

The Ponzi scheme,85811,6071136,Miliony_ze_znaczkow__ktorych_nie_bylo__Jak_narodzil.html


Tomasz Kulig