Little-known AV evasion: a warning about the pitfalls of mistaken evasion thinking

ID MYHACK58:62200821371
Type myhack58
Reporter Anonymous (佚名)
Modified 2008-12-07T00:00:00


Author: A1Pass (please credit the author when reprinting)

As an AV-evasion (免杀) enthusiast, I noticed some strange phenomena while learning the craft, and when I studied them in depth I found a whole new world behind them.

For example, do "implicit signatures" actually exist? Why do signature-locating runs sometimes give different results each time? What causes these situations? And are the explanations currently circulating online actually correct?

To study these questions in depth, we must pick a direction from which to unravel the puzzle. Should we start from the antivirus software itself, or from the working principle of the signature locator? Or attend to both?

Taking this question as our starting point, we will explore some mistaken ideas about AV evasion circulating in the industry, and work out where it is most sensible to begin. Today I will lead readers through both sides of a debate and the strange things encountered along the way.

I. The affirmative side: taking antivirus software as the basis of research

The affirmative side argues that it is more appropriate to study these problems by taking the antivirus software itself as the foundation.

Speaking of antivirus fundamentals, some friends may find the topic tiresome, but this seemingly simple subject holds the key to the mystery. An antivirus product is far more complex than the viruses and trojans it hunts, and its principles are correspondingly more intricate. In general, an antivirus product consists of a scanner, a virus database, and a virtual machine. The scanner is the core, responsible for discovering viruses, but a product never contains just one scanner: most combine several scanners or scanning algorithms. Although the basic idea of antivirus is pattern matching, most products today have scanners of their own design. Signature scanning was merely the first generation of mainstream scanning technology; second-generation scanning techniques have long since become widespread.
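As a minimal illustration of first-generation signature scanning, the sketch below searches a file's bytes for known patterns. The signature names and byte patterns are invented for the example; real virus databases are far larger and more structured.

```python
# Toy first-generation signature scanner: exact byte-pattern matching.
# The "database" below is hypothetical, for illustration only.
SIGNATURES = {
    "Demo.TrojanA": bytes.fromhex("deadbeef41414141"),
    "Demo.TrojanB": bytes.fromhex("cafebabe90909090"),
}

def scan(data: bytes) -> list:
    """Return the names of all signatures found in the data."""
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

sample = b"\x00" * 32 + bytes.fromhex("deadbeef41414141") + b"\x00" * 32
print(scan(sample))  # ['Demo.TrojanA']
```

Any single-byte change inside the matched pattern defeats this scanner, which is exactly why second-generation techniques moved beyond plain pattern matching.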

Take the most typical example, Kaspersky. Plain signature matching is now rarely seen in it; the mainstream scanning technique Kaspersky applies is a unique algorithm called the "cryptographic checksum," which differs from parity checking as we usually understand it. The idea is to use certain characteristics of the file to select an offset region, then compute a checksum over that region, producing two values; these two values are, in effect, the final "signature." Experiment shows that this checksum algorithm is not sensitive to swapping letter case or swapping lines with each other, which directly makes our evasion work much harder.
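The behavior described above can be modeled with a toy checksum. This is only a sketch of the general idea, not Kaspersky's actual algorithm: a plain byte-sum checksum over a region is order-insensitive, so swapping two lines leaves the "signature" unchanged, while changing a single byte does not.

```python
# Toy model of region-checksum scanning (NOT the real Kaspersky algorithm):
# the "signature" is a pair of values computed from an offset region,
# here (region length, byte sum). A byte sum is insensitive to reordering
# within the region, mirroring the case-swap / line-swap observation above.
def region_checksum(data: bytes, offset: int, length: int):
    region = data[offset:offset + length]
    return (len(region), sum(region))

original = b"PUSH EBP\nMOV EBP,ESP\n"
swapped  = b"MOV EBP,ESP\nPUSH EBP\n"   # same bytes, lines swapped
patched  = b"PUSH EBX\nMOV EBP,ESP\n"   # one byte actually changed

same = region_checksum(original, 0, len(original)) == region_checksum(swapped, 0, len(swapped))
diff = region_checksum(original, 0, len(original)) != region_checksum(patched, 0, len(patched))
print(same, diff)  # True True: reordering evades nothing, a real edit does
```

This is why, under such a scanner, cosmetic instruction reshuffling fails as an evasion technique while substantive byte changes succeed.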

This technique has been enabled in Kaspersky since as early as version 6.0, and it has given rise to a number of strange phenomena in evasion work; friends in the industry mostly reach for vocabulary like "hidden signatures" to barely explain them. What follows is my own reasoning, one practitioner's opinion; if anything is wrong, I hope you will correct me.

First, a special scanning method must be accompanied by a special kind of signature. The "signature" located under Kaspersky's cryptographic checksum is usually fairly large, and because it has anti-interference properties, some simple modifications will not work, and each locating run can produce very different positions. To interfere effectively with the checksum value, one more often has to modify several seemingly unrelated places to achieve evasion, and this is what gave rise to the conclusion that "implicit signatures" exist.

In fact, if we think about it in reverse, we find that a "hidden signature" is a technique that is unlikely to be implementable. If this signature really is implicit, then when the antivirus engine detects it, does it report the file as malicious or not? If it reports it, it can hardly be called an "implicit signature." If it does not, then some mechanism must exist to trigger it, and in that case there is another explanation: interference code. If the region where the original signature lives is completely overwritten with 00 to defeat locating, the antivirus software activates the interference code, and a false signature leads us to fail at the last step of evasion.
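This "interference code" hypothesis can be sketched as follows. To be clear, this is a speculative model of the article's reasoning, not a documented antivirus mechanism, and all offsets and patterns are invented: if the primary signature region has been wiped to zeros (as a locator does), a secondary decoy check elsewhere still flags the file, so the locator reports a misleading "signature" position.

```python
# Speculative model of the "interference code" hypothesis (not a real AV
# mechanism): a decoy check fires only when the true signature region has
# been zero-filled, so a signature locator is led to a false position.
PRIMARY_OFF = 100
PRIMARY_SIG = b"\xde\xad\xbe\xef"   # hypothetical true signature
DECOY_SIG   = b"\x90\x90\x90\x90"   # hypothetical decoy pattern

def detect(data: bytes) -> bool:
    region = data[PRIMARY_OFF:PRIMARY_OFF + len(PRIMARY_SIG)]
    if region == PRIMARY_SIG:
        return True                        # normal signature hit
    if region == b"\x00" * len(PRIMARY_SIG):
        return DECOY_SIG in data           # region wiped: decoy check fires
    return False

sample = bytearray(b"\x41" * 200)
sample[PRIMARY_OFF:PRIMARY_OFF + 4] = PRIMARY_SIG
sample[50:54] = DECOY_SIG

wiped = bytearray(sample)
wiped[PRIMARY_OFF:PRIMARY_OFF + 4] = b"\x00" * 4   # locator zero-fills the true region

print(detect(bytes(sample)), detect(bytes(wiped)))  # True True: still "detected"
```

Under this model the zero-filled file still scans as malicious, so the locator concludes the signature lies elsewhere, which matches the failures the article describes.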

So the "implicit signature" should be regarded as a mistaken idea; the real cause of the phenomenon is a scanning algorithm that we may or may not know about, such as Kaspersky's cryptographic-checksum scanning.

Thus the affirmative side feels that anyone who wants to study evasion techniques should first understand how antivirus software works. As shown above, a single scanning algorithm was enough to expose some of our previous misconceptions; therefore the affirmative side believes that studying evasion problems through antivirus principles is the wise approach.

II. The negative side: thoughts from the signature locator

Antivirus software is the opponent, even the enemy, of AV evasion, and a relatively strong one. The negative side therefore argues that antivirus internals are not something an ordinary evasion enthusiast has the ability to research; instead we should study the nature of things through the "gun" in our own hands: the signature locator.

Here I will take MyCCL, the locator most commonly used here at home, as the example. First consider the following two questions:

1. Why do signature-locating results differ from run to run? Besides the antivirus software itself, are there other reasons?

2. Why does locating accuracy for the same signature vary between high and low, sometimes by a large margin?

Now let us look at MyCCL's working-principle diagram, shown in Figure 1.

[Figure 1: 001.JPG, MyCCL working-principle diagram]

Now assume the rather extreme example shown in Figure 2:

1111111111111111
1111111AA1111111
1111111111111111

In this example the signature ("AA") lies exactly at the center of the file. When we set the block count to 2, the following process occurs:

The first batch, as shown in Figures 3 and 4:

1111111111111111    1111111111111111
1111111A00000000    1111111AA1111111
0000000000000000    1111111111111111

The first generated file masks the signature; the signature lies within bytes 24-48.

The second batch, as shown in Figures 5 and 6:

1111111111111111    1111111111111111
1111111A00000000    1111111A00000000
0000000000000000    0000000000000000

The first generated file, together with the second generation, masks the signature; the signature lies within bytes 24-48.

The process above shows that filling half of the file happens to destroy the signature. When we then use this result for the secondary "composite signature positioning," we will never obtain a detected file, because the signature was already split in two at locating time. In this situation, the highest accuracy MyCCL can achieve is half the file size...
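The failure mode just described can be simulated in a few lines of Python. The 48-byte file and the "AA" signature are the article's toy example; the detection function is a simple stand-in for a real scanner.

```python
# Simulation of the extreme case: a 48-byte file whose signature ("AA")
# straddles the midpoint. With the block count set to 2, zero-filling
# either half destroys the signature, so every generated file scans clean
# and composite positioning can never narrow the signature down.
SIG = b"AA"

def detected(data: bytes) -> bool:
    return SIG in data                     # stand-in for the AV scan

def zero_fill(data: bytes, start: int, end: int) -> bytes:
    return data[:start] + b"\x00" * (end - start) + data[end:]

original = b"1" * 23 + SIG + b"1" * 23     # 48 bytes, AA at bytes 23-24

blocks = 2
size = len(original) // blocks
for i in range(blocks):
    test_file = zero_fill(original, i * size, (i + 1) * size)
    print(i, detected(test_file))          # both halves print False
```

Because the signature spans the block boundary, neither half-filled file still contains "AA", which is exactly why the locator's second pass finds nothing.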

In my experiments, with the block count set to 3 and the composite positioning still divided into 2, the accuracy improves to 1/3 of the file size; with the block count set to 6, the accuracy becomes 1/6 of the file.

Through this example, I believe readers have seen the effect of the block count on signature-locating results. We also found that the more sample files generated in the initial round, the more accurate the located result naturally is; and as the block count gradually increases, the impact of this effect on signature accuracy gradually shrinks.

It is therefore not hard to see that the block-count setting affects, to some extent, our evasion success rate. But perhaps a reader will ask: "this is just one special example; how persuasive can it be?"

That objection is fair. But if we take techniques like cryptographic-checksum scanning into account, it is not hard to see that the probability of this case occurring is still quite large, given the large signatures a cryptographic checksum produces. So I think this particular example remains very convincing.

Beyond that, we should also think about the choice of fill data. Most friends use the default "00". Setting aside the interference-code issue discussed earlier, in terms of probability, the blank area of a file is generally at least about 10% of the whole file's size. In other words, the byte "00" appears in a file with a probability of at least 10%, while each of the other 15 hexadecimal digits appears with a probability of at most about 6.6%. So the chance of "00" appearing inside a signature is correspondingly higher, and if we hit this situation and also use "00" as the fill value, filling a signature region that is itself full of "00" with more "00", we are bound to fail...
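The byte-frequency argument is easy to check empirically. The sketch below uses synthetic data in place of a real PE file, with the 10% zero-padding figure taken from the article's own estimate; on a real executable you would read the file's bytes instead.

```python
# Measure how common the 0x00 byte is relative to all other byte values.
# The data here is synthetic (10% zero padding, per the article's estimate);
# substitute the bytes of a real PE file to check the claim for yourself.
from collections import Counter

def zero_ratio(data: bytes) -> float:
    """Fraction of the file occupied by 0x00 bytes."""
    counts = Counter(data)
    return counts.get(0, 0) / max(len(data), 1)

# Synthetic file: a spread of non-zero bytes plus ~10% zero padding.
data = bytes(range(1, 256)) * 10 + b"\x00" * 284
print(f"0x00 ratio: {zero_ratio(data):.1%}")  # 0x00 ratio: 10.0%
```

If "00" is over-represented in files, then a signature is also more likely to contain "00" bytes, and zero-filling such a signature region changes nothing, which is the failure the paragraph above warns about.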

From the above we can see that by studying the signature locator MyCCL, we obtained explanations for both the inconsistent locating results and the locating-accuracy problem. Thus the negative side holds that conducting research through the signature locator is the practical and sensible method.

Here this debate, not an especially fierce one, officially ends. The final verdict, I think, is not important; what matters is whether you, the reader, learned something from it. If the answer is yes, my purpose is achieved.

As for this article's question, I believe both sides' views are correct; clearly we cannot comprehensively study anything from a single angle. The reason I, A1Pass, "went overboard" and staged this debate was to change the usual style of technical articles and perhaps make them more interesting to read; call it an experiment.

This article threw out a simple question through a few simple ideas, and through this simple-to-complex process let us step out of some previously unnoticed errors. The conclusions raised here, though rather bold, are all results of the author's careful experiments. Since the author's level is limited, questionable points are unavoidable; please point them out, and you are also welcome to discuss them with me on my blog.