Sunday, August 22, 2010

IEEE 802.22 WRAN Standard


The IEEE 802.22 standard defines a system for a Wireless Regional Area Network (WRAN) that uses the unused channels, or "white spaces", within the television bands between 54 and 862 MHz, especially within rural areas where spectrum usage may be lower.

To achieve its aims, the 802.22 standard utilises cognitive radio technology to ensure that no undue interference is caused to television services using the television bands. In this way 802.22 is the first standard to fully incorporate the concept of cognitive radio.

The IEEE 802.22 WRAN standard is aimed at supporting license-exempt devices on a non-interfering basis in spectrum that is allocated to the TV Broadcast Service. With operating data rates comparable to those offered by many DSL / ADSL services it can provide broadband connectivity using spectrum that is nominally allocated to other services without causing any undue interference. In this way IEEE 802.22 makes effective use of the available spectrum without the need for new allocations.

IEEE 802.22 background

The IEEE 802.22 standard for a Wireless Regional Area Network or WRAN system was born out of a number of requirements, and also out of developments in many areas of technology.

In recent years there has been a significant proliferation in the number of wireless applications deployed, and along with the more traditional services this has placed significant pressure on the available spectrum. Coupled with this, there is always a delay in re-allocating any spectrum that may become available.

In addition, the occupancy levels of much of the spectrum that has already been allocated are relatively low. For example, in the USA not all TV channels are used in a given area, as it is necessary to allow guard bands between active high-power transmitters to prevent mutual interference. Also, not all stations are active all of the time. Therefore, by organising other services around these constraints it is possible to gain greater spectrum utilisation without causing interference to other users. Although the impetus for 802.22 comes from the USA, the aim is that the standard can be used within any regulatory regime.

One particular technology that is key to the deployment of new services that may bring better spectrum utilisation is cognitive radio. By using this, radios can sense their environment and adapt accordingly. The use of cognitive radio technology is therefore key to the new IEEE 802.22 WRAN standard.

IEEE 802.22 standard history

The concept for 802.22 can trace its origins back to the first ideas for cognitive radio. With the development of technologies for the software-defined radio, J. Mitola coined the name "Cognitive Radio" in his 2000 doctoral thesis for a form of radio that would change its behaviour by detecting its environment and adapting accordingly.

In 2004 the FCC issued an NPRM (Notice of Proposed Rulemaking) regarding the television spectrum. As a result, in November 2004 the IEEE 802.22 working group was formed to develop a WRAN system that would deliver broadband connectivity, particularly to rural areas, by sharing the television spectrum.

By May 2006 draft v0.1 of the IEEE 802.22 standard was available, although much work was still required. Discussions were also needed with the broadcasters whose spectrum was being shared, as they were fearful of interference and, as a result, reduced advertising revenues.

The standard was expected to be completed by the first quarter of 2010, and with its completion some of the first networks could be deployed.

802.22 basics

There are a number of elements that were set down as the basis of the 802.22 standard. These include items such as the system topology, system capacity and the projected coverage for the system. With these basic system parameters in place, the other areas of the standard follow.
System topology: The system is intended to be a point-to-multipoint system, i.e. it has a base station with a number of users, or Customer Premises Equipments (CPEs), located within a cell. The base station links back to the main network, transmits data on the downlink to the various users and receives data from the CPEs on the uplink. It also controls the medium access and, in addition to these traditional roles for a base station, it manages the "cognitive radio" aspects of the system. It uses the CPEs to perform a distributed measurement of the signal levels of possible television (or other) signals on the various channels at their individual locations. These measurements are collected and collated, and the base station decides whether any action needs to be taken. In this way the IEEE 802.22 standard defines one of the first cognitive radio networks.
Coverage area: The coverage area for the IEEE 802.22 standard is much greater than that of many other IEEE 802 standards - 802.11, for example, is limited to less than 50 metres in practice. For 802.22, however, the specified range for a CPE is 33 km, and in some instances base station coverage may extend to 100 km. To achieve the 33 km range, the power level of the CPE is 4 Watts EIRP (effective isotropic radiated power).
System capacity: The system has been defined to enable users to achieve a level of performance similar to that of available DSL services. This equates to a downlink or download speed of around 1.5 Mbps at the cell periphery and an uplink or upstream speed of 384 kbps. These figures assume 12 simultaneous users. To attain this, the overall system capacity must be 18 Mbps in the downlink direction.

In order to meet these requirements using a 6 MHz television channel, a spectral efficiency of around 3 bit/s/Hz is required to give the necessary physical-layer raw data rate.
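
As a quick back-of-the-envelope check of these figures (an illustrative calculation, not taken from the standard text), the aggregate downlink capacity and the implied spectral efficiency are:

\[ 12 \times 1.5\ \text{Mbps} = 18\ \text{Mbps}, \qquad \frac{18\ \text{Mbps}}{6\ \text{MHz}} = 3\ \text{bit/s/Hz} \]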

Monday, August 16, 2010

A study of knowledge management


In the prevailing uncertain and ever-changing business environment, knowledge has become the single certain source of sustainable competitive advantage. Learning from past mistakes and avoiding reinventing the wheel are crucial tasks, and no organization can today afford not to look for ways to make the best use of its knowledge. With Siemens Industrial Turbomachinery AB (SIT) being an actor in a complex and high-technology industry, managing and leveraging the organization’s knowledge becomes essential. It came to the authors’ attention that the project manager department (GL) within the gas division of SIT experienced a need for improved processes for managing and utilizing the organization’s knowledge-base.

On the first of January 2010 Siemens carried out a major reorganization, which affected SIT and the GL department by merging two previously separate departments of project managers into one unit. With efforts underway to harmonize the two departments’ former working methods, the situation made it timely to conduct a study on how to improve the company’s knowledge management initiative. This master thesis hence evolved to focus on examining and pointing out the improvement opportunities that exist with regard to knowledge sharing between projects, and between projects and the organization, and how tools and processes should be designed to collect, preserve, disseminate and reuse experiences, knowledge and lessons learned within a project-based organization in the best possible way.

The research approach of the study was of a qualitative character, including interviews with the 16 project managers of GL and other key employees both at SIT and at Siemens Oil & Gas division’s new CS and IP business units. Combined with meeting participation and observations of the project managers in their daily operations, an increased understanding of the current situation at SIT and GL emerged; an understanding needed to identify the reasons and factors behind the low degree of retention and utilization of the organization’s knowledge-base, and one that led to the development of a model highlighting the important aspects of successful knowledge management initiatives and how these aspects correlate.

In order to improve knowledge utilization, a continuous gathering of lessons learned throughout the project life-cycle needs to be implemented. This is primarily achieved by collecting lessons learned at the regular project meetings together with special lessons learned workshops. The collection and reutilization of knowledge hence needs to be integrated with the project management process. Improving the different forums available for knowledge sharing is also needed to enable an increased level of transformation of human capital into structural capital, augmenting the organization’s knowledge-base. Providing forums for knowledge sharing, together with visible management support through actions, feedback and the introduction of a culture aimed at organizational learning, further enhances the retention and utilization of the organization’s knowledge-base.

Although this study is based on a case study of the SIT organization, the conclusions are regarded to be of value for other project-based organizations as well, rendering them generalizable to other lines of business. The generic conclusion of this study is that in order to implement a successful knowledge management initiative all factors of the model need to be considered and attended to.

Monday, August 9, 2010

System Implementation


The implementation of the algorithms described in Chapter 3 consists of approximately 7000 lines of C++.
This code is logically divided into components that match the system diagram in Figure 3.1. In this Chapter
we will explain the details of our implementation, focusing on the instrumentation and analysis routines that
make up the core of the system and the corresponding data structures.
4.1 Binary Instrumentation
The implementation of stage 1 of our algorithm consists essentially of two components that work in tandem to
perform instrumentation and run-time analysis. Using the functionality provided by Pin we instrument a
variety of events, including thread creation, system calls, and instruction execution. The instrumentation
code analyses the events and registers callbacks to the correct run-time processing routines.
4.1.1 Hooking System Calls
All taint analysis algorithms require some method to seed an initial pool of tainted locations. One approach
is to hook system calls known to read data that may be potentially tainted by attacker input, e.g. read.
Another potential approach is to hook specific library calls, but as previously pointed out [14] this could
require one to hook large numbers of library calls instead of a single system call on which they all rely.
To mark memory locations as tainted we hook the relevant system calls and extract their destination
locations. Pin allows us to register functions to be called immediately before a system call is executed
(PIN_AddSyscallEntryFunction) and after it returns (PIN_AddSyscallExitFunction). We use
this functionality to hook read, recv and recvfrom. When a system call is detected we extract the
destination buffer of the function using PIN_GetSyscallArgument and store the location. This provides
us with the start address for a sequence of tainted memory locations.
When a system call returns we extract its return value using PIN_GetSyscallReturn. For the system
calls we hook, a return value greater than 0 means the call succeeded and data was read in. When the return
value is greater than 0 it also indicates exactly how many contiguous bytes from the start address we should
consider to be tainted. On a successful system call we first store the data read in, the destination memory
location and the file or socket it came from in a DataSource object. The DataSource class is a class
we created to keep track of any input data so that it can be recreated later when building the
exploit. It also allows us to determine what input source must be used in order to deliver an exploit to the
target program. Once the DataSource object has been stored we mark the range of the destination buffer
as tainted.
Once a location has been marked as tainted, the instruction-level instrumentation code can propagate the
taint information through the program's memory and registers.
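The outline below is a minimal sketch of how these system call hooks can be registered with Pin. It is illustrative rather than a copy of our implementation: only read is handled, a single pending buffer is tracked rather than one per thread, and markTainted is a hypothetical stand-in for the TaintManager/DataSource handling described above.
#include "pin.H"
#include <sys/syscall.h>

// Hypothetical stand-in for the taint engine call that marks [start, start+len) as tainted.
static VOID markTainted(ADDRINT start, UINT32 len) { /* would call into the TaintManager */ }

static ADDRINT pendingBuf = 0;      // destination buffer of an in-flight read()
static BOOL pendingRead = FALSE;

VOID OnSyscallEntry(THREADID tid, CONTEXT *ctxt, SYSCALL_STANDARD std, VOID *v)
{
    if (PIN_GetSyscallNumber(ctxt, std) == SYS_read) {
        // read(fd, buf, count): argument 1 is the destination buffer
        pendingBuf = PIN_GetSyscallArgument(ctxt, std, 1);
        pendingRead = TRUE;
    }
}

VOID OnSyscallExit(THREADID tid, CONTEXT *ctxt, SYSCALL_STANDARD std, VOID *v)
{
    if (!pendingRead) return;
    INT32 ret = (INT32)PIN_GetSyscallReturn(ctxt, std);
    if (ret > 0)
        markTainted(pendingBuf, (UINT32)ret);   // taint exactly the bytes read in
    pendingRead = FALSE;
}

int main(int argc, char *argv[])
{
    PIN_Init(argc, argv);
    PIN_AddSyscallEntryFunction(OnSyscallEntry, 0);
    PIN_AddSyscallExitFunction(OnSyscallExit, 0);
    PIN_StartProgram();                 // never returns
    return 0;
}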
4.1.2 Hooking Thread Creation and Signals
As well as system calls we insert hooks on thread creation and on signals received from the OS. In multithreaded
applications it is necessary for us to determine when threads are created and destroyed and to
identify the currently active thread when calling our analysis routines. Threads do not share registers so
a register that is tainted by one thread should not be marked as tainted for any others. When a thread is
created we instantiate a new object in our taint analysis engine that represents the taint state of its registers.
This object is deleted when the thread is destroyed.
As mentioned in Chapter 3, one of the mechanisms one could potentially use to detect a possible vulnerability
is to analyse any signals sent to the program. Using the function PIN_AddContextChangeFunction
we can register a routine to intercept such signals. If the signal is one of SIGKILL, SIGABRT or SIGSEGV
we pause the program and attempt to generate an exploit. We eventually decided not to use this mechanism
for vulnerability detection as it introduced complications when attempting to determine the exact cause of
the signal and hence the vulnerability.
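A sketch of the corresponding thread and signal hooks is shown below; again this is illustrative of the assumed Pin callbacks rather than the exact thesis code, and the callback bodies are reduced to comments.
#include "pin.H"
#include <signal.h>

VOID OnThreadStart(THREADID tid, CONTEXT *ctxt, INT32 flags, VOID *v)
{
    // Create a per-thread object holding the taint state of this thread's registers.
}

VOID OnThreadFini(THREADID tid, const CONTEXT *ctxt, INT32 code, VOID *v)
{
    // Destroy the per-thread register taint state.
}

VOID OnContextChange(THREADID tid, CONTEXT_CHANGE_REASON reason,
                     const CONTEXT *from, CONTEXT *to, INT32 info, VOID *v)
{
    // For signal-related context changes 'info' holds the signal number.
    if (reason == CONTEXT_CHANGE_REASON_FATALSIGNAL &&
        (info == SIGSEGV || info == SIGABRT)) {
        // Pause and attempt exploit generation (the mechanism we eventually abandoned).
    }
}

// Registration, typically from main() after PIN_Init():
//   PIN_AddThreadStartFunction(OnThreadStart, 0);
//   PIN_AddThreadFiniFunction(OnThreadFini, 0);
//   PIN_AddContextChangeFunction(OnContextChange, 0);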
4.1.3 Hooking Instructions for Taint Analysis
In Chapter 3 all of the binary instrumentation is performed by Algorithm 3.1. In this section we will elaborate
on the methods by which this instrumentation takes place.
Our taint analysis engine provides a low level API through the TaintManager class. This class provides
methods for directly marking memory regions and registers as tainted or untainted. To reflect the
taint semantics of each x86 instruction at run-time we created another class titled x86Simulator. This
class interacts directly with the TaintManager class and provides a higher level API to the rest of our
analysis client. For each x86 instruction X the x86Simulator contains functions with names beginning
with simulateX, e.g. simulateMOV corresponds to the mov instruction. Each of these functions takes
arguments specifying the operands of the x86 instruction and computes the set of tainted locations resulting
from the instruction and these operands.
For each instruction, taint analysis is performed by inserting a callback to the correct simulate function
into the instruction stream and providing it with the instruction's operands. As Pin does not utilise an IR, this
requires us to do some extra processing on each instruction in order to determine the required simulation
function and extract the instruction's operands.
The x86Simulator class provides a mechanism for taint analysis, but to use it we must have a method of
analysing individual x86 instructions. Pin allows one to register a function to hook every executed instruction
via INS_AddInstrumentFunction. We use this function to filter out those instructions we wish to process.
For every instruction executed we first determine exactly what instruction it is so we can model its taint
semantics. This process is made easier as Pin filters each instruction into one or more categories, e.g. the
movsb instruction belongs to the XED_CATEGORY_STRINGOP category. It also assigns each instruction a
unique type, e.g. XED_ICLASS_MOVSB for the movsb instruction. An example of the code that performs
this filtering is shown in Listing 4.1.
This code allows us to determine the type of instruction being executed. The code to process the actual
instruction and insert the required callback is encapsulated in the processX86.processX functions.
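For completeness, the instruction-level hook itself is registered roughly as follows; this is a small sketch in which InstrumentInstruction wraps the category switch of Listing 4.1.
#include "pin.H"

VOID InstrumentInstruction(INS ins, VOID *v)
{
    // The category switch of Listing 4.1 goes here, dispatching to the
    // appropriate processX86.processX routine for each instruction we model.
}

int main(int argc, char *argv[])
{
    PIN_Init(argc, argv);
    INS_AddInstrumentFunction(InstrumentInstruction, 0);
    PIN_StartProgram();
    return 0;
}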
Inserting Taint Analysis Callbacks
When hooking an instruction the goal is to determine the correct x86Simulator function to register a
callback to, so that at run-time we can model the taint semantics of the instruction correctly. The code in
Listing 4.1 allows us to determine the instruction being executed, but each instruction can have different
taint semantics depending on the types of its operands. For example, the x86 mov instruction can occur
in a number of different forms, with the destination and source operands potentially being one of several
combinations of memory locations, registers and constants. In order to model the taint semantics of the
instruction we must also know the type of each operand as well as the type of the instruction. Listing 4.2
demonstrates the use of the Pin API to extract the required operand information for the mov instruction.
The code shown is part of the processX86.processMOV function.
Listing 4.1: “Filtering x86 instructions”
1 UINT32 cat = INS_Category(ins);
2
3 switch (cat) {
4 case XED_CATEGORY_STRINGOP:
5 switch (INS_Opcode(ins)) {
6 case XED_ICLASS_MOVSB:
7 case XED_ICLASS_MOVSW:
8 case XED_ICLASS_MOVSD:
9 processX86.processREP_MOV(ins);
10 break;
11 case XED_ICLASS_STOSB:
12 case XED_ICLASS_STOSD:
13 case XED_ICLASS_STOSW:
14 processX86.processSTO(ins);
15 break;
16 default:
17 insHandled = false;
18 break;
19 }
20 break;
21
22 case XED_CATEGORY_DATAXFER:
23
24 ...
Listing 4.2: “Determining the operand types for a mov instruction”
1 if (INS_IsMemoryWrite(ins)) {
2 writesM = true;
3 } else {
4 writesR = true;
5 }
6
7 if (INS_IsMemoryRead(ins)) {
8 readsM = true;
9 } else if (INS_OperandIsImmediate(ins, 1)) {
10 sourceIsImmed = true;
11 } else {
12 readsR = true;
13 }
Listing 4.3: “Inserting the analysis routine callbacks for a mov instruction”
1 if (writesM) {
2 INS_InsertCall(ins, IPOINT_BEFORE, AFUNPTR(&x86Simulator::simMov_RM),
3 IARG_MEMORYWRITE_EA,
4 IARG_MEMORYWRITE_SIZE,
5 IARG_UINT32, INS_RegR(ins, INS_MaxNumRRegs(ins)-1),
6 IARG_INST_PTR,
7 IARG_END);
8 } else if (writesR) {
9 if (readsM)
10 INS_InsertCall(ins, IPOINT_BEFORE, AFUNPTR(&x86Simulator::simMov_MR), ..., IARG_END);
11 else
12 INS_InsertCall(ins, IPOINT_BEFORE, AFUNPTR(&x86Simulator::simMov_RR), ..., IARG_END);
13 }
Once the operand types have been extracted we can determine the correct function in x86Simulator
to register as a callback. The x86Simulator class contains a function for every x86 instruction we wish
to analyse, and for each instruction it contains one or more variants depending on the possible variations in
its operand types. For example, a mov instruction takes two operands; ignoring constants, it can move data
from memory to a register, from a register to a register or from a register to memory. This results in three
functions in x86Simulator to handle the mov instruction - simMov_MR, simMov_RR and simMov_RM.
The code in Listing 4.3 is from the function processX86.processMOV. It uses the function INS_InsertCall
to insert a callback to the correct analysis routine depending on the types of the mov instruction's operands.
Along with the callback function to register, INS_InsertCall takes the parameters to pass to this function¹.
This process is repeated for any x86 instructions we consider to propagate taint information.
Under-approximating the Set of Tainted Locations
Due to time constraints on our implementation we have not created taint simulation functions for all possible
x86 instructions. In order to avoid false positives it is therefore necessary to have a default action for all
non-simulated instructions. This default action is to untaint all destination operands of the instruction. Pin
provides API calls that allow us to access the destination operands of an instruction without considering its
exact semantics. By untainting these destinations we ensure that all locations that we consider to be tainted
are in fact tainted. We perform a similar process for instructions that modify the EFLAGS register but are
not instrumented.
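A default handler along these lines (a sketch, in which untaintReg and untaintMem are hypothetical stand-ins for the corresponding TaintManager calls) could be written as:
// Assumed run-time routines that clear taint; stand-ins for the TaintManager interface.
VOID untaintReg(UINT32 regId, THREADID tid) { /* would call TaintManager::unTaintReg */ }
VOID untaintMem(ADDRINT addr, UINT32 size, THREADID tid) { /* would call TaintManager::unTaintMem */ }

// Default action for an instruction we do not simulate: untaint every
// destination so that the set of tainted locations under-approximates reality.
VOID processDefault(INS ins)
{
    // Untaint all registers written by the instruction.
    for (UINT32 i = 0; i < INS_MaxNumWRegs(ins); i++) {
        INS_InsertCall(ins, IPOINT_BEFORE, AFUNPTR(untaintReg),
                       IARG_UINT32, (UINT32)INS_RegW(ins, i),
                       IARG_THREAD_ID,
                       IARG_END);
    }
    // Untaint any memory location written by the instruction.
    if (INS_IsMemoryWrite(ins)) {
        INS_InsertCall(ins, IPOINT_BEFORE, AFUNPTR(untaintMem),
                       IARG_MEMORYWRITE_EA,
                       IARG_MEMORYWRITE_SIZE,
                       IARG_THREAD_ID,
                       IARG_END);
    }
}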
4.1.4 Hooking Instructions to Detect Potential Vulnerabilities
We detect potential vulnerabilities by checking the arguments to certain instructions. For a direct exploit
we require the value pointed to by the ESP register at a ret instruction to be tainted, or the memory
location/register used by a call instruction. We can extract the value of the ESP using the IARG_REG_VALUE
placeholder provided by Pin, and the operands to call instructions can be extracted in the same way as for
the taint analysis callbacks.
For an indirect exploit we must check that the destination address of the write instruction is tainted, rather
than the value at that address. As described in [19], an address to an x86 instruction can have a number of
constituent components, with the effective address computed as follows²:
Effective address = Displacement + BaseReg + IndexReg * Scale
In order to exploit a write vulnerability we must control one or more of these components. Pin provides
functions to extract each component of an effective address, e.g. INS_OperandMemoryDisplacement,
INS_OperandMemoryIndexReg and so on. For each instruction that writes to memory we insert a callback
to a run-time analysis routine that takes these address components as parameters, along with the value of the
write source.
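The sketch below illustrates how such checks might be registered. It is simplified relative to the description above: checkRetTarget and checkWrite are hypothetical analysis routines, and for writes only the resolved effective address is passed rather than the individual displacement, base, index and scale components.
// Hypothetical run-time routines that query the taint engine.
VOID checkRetTarget(ADDRINT stackPtr, ADDRINT pc) { /* are the bytes at stackPtr tainted? */ }
VOID checkWrite(ADDRINT writeAddr, UINT32 size, ADDRINT pc) { /* is writeAddr derived from tainted data? */ }

VOID hookForVulnerabilities(INS ins)
{
    if (INS_IsRet(ins)) {
        // At a ret, pass the current stack pointer; the analysis routine checks
        // whether the return address it points to is tainted (direct exploit).
        INS_InsertCall(ins, IPOINT_BEFORE, AFUNPTR(checkRetTarget),
                       IARG_REG_VALUE, REG_STACK_PTR,
                       IARG_INST_PTR,
                       IARG_END);
    } else if (INS_IsMemoryWrite(ins)) {
        // For writes, check whether the destination address is tainted (indirect exploit).
        INS_InsertCall(ins, IPOINT_BEFORE, AFUNPTR(checkWrite),
                       IARG_MEMORYWRITE_EA,
                       IARG_MEMORYWRITE_SIZE,
                       IARG_INST_PTR,
                       IARG_END);
    }
}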
4.1.5 Hooking Instructions to Gather Conditional Constraints
As described in Chapter 3, to gather constraints from conditional instructions we record the operands
of instructions that modify the EFLAGS register and then generate constraints on these operands when
a conditional jump is encountered. Detecting whether an instruction writes to the EFLAGS register is done
by checking if the EFLAGS register is in the list of written registers for the current instruction, i.e. if
INS_RegWContain(ins, REG_EFLAGS) is true.
¹ At instrumentation-time it is sometimes not possible to determine the exact operand values an instruction will have at run-time.
To facilitate passing such information to run-time analysis routines Pin provides placeholder values. These placeholders
are replaced by Pin with the corresponding value at run-time. For example, there are placeholders for the address written
by the instruction (IARG_MEMORYWRITE_EA) and the amount of data written (IARG_MEMORYWRITE_SIZE). There are a number
of other placeholders defined for retrieving common variables such as the current thread ID, instruction pointer and register
values.
² From the Pin website, http://www.pintool.org
Listing 4.4: “Inserting a callback on EFLAGS modification”
1 if (op0Mem && op1Reg) {
2 INS_InsertCall(ins, IPOINT_BEFORE, AFUNPTR(&x86Simulator::updateEflagsInfo_RM),
3 IARG_MEMORYREAD_EA,
4 IARG_MEMORYREAD_SIZE,
5 IARG_UINT32, INS_RegR(ins, INS_MaxNumRRegs(ins)-1),
6 IARG_UINT32, eflagsMask,
7 IARG_CONTEXT,
8 IARG_THREAD_ID,
9 IARG_INST_PTR,
10 IARG_END);
11 }
Listing 4.5: “Inserting callbacks on a conditional jump”
1 VOID
2 processJCC(INS ins, JCCType jccType)
3 {
4 unsigned eflagsMask = extractEflagsMask(ins, true);
5 INS_InsertCall(ins, IPOINT_AFTER, AFUNPTR(&x86Simulator::addJccCondition),
6 IARG_UINT32, eflagsMask,
7 IARG_BOOL, true,
8 IARG_UINT32, jccType,
9 IARG_INST_PTR,
10 IARG_END);
11
12 INS_InsertCall(ins, IPOINT_TAKEN_BRANCH, AFUNPTR(&x86Simulator::addJccCondition),
13 IARG_UINT32, eflagsMask,
14 IARG_BOOL, false,
15 IARG_UINT32, jccType,
16 IARG_INST_PTR,
17 IARG_END);
18 }
If an instruction does write to the EFLAGS register we
can extract from it a bitmask describing those flags written.
Using the same INS_Is* functions as shown in Listing 4.2 we determine the types of each operand.
Once again this is necessary as we use a different simulation function for each combination of operand types,
where an operand type can be a memory location, register or constant. Once the operand types have been
discovered we register a callback to the correct run-time routine, passing it the instruction operands and a
bitmask describing the bits changed in the EFLAGS register. Listing 4.4 exemplifies how the callback is
registered for a two-operand instruction where the first operand is a memory location and the second is a
register.
On lines 3 and 4 the Pin placeholders are used to extract the memory location read and its size. The
register ID is extracted on line 5 and passed as a 32-bit integer. Similarly, the bitmask describing the EFLAGS
bits modified is passed as a 32-bit integer on line 6.
Inserting Callbacks to Record Conditions from Conditional Jumps
The above code is used to keep track of the operands on which conditional jumps depend. To then
convert this information to a constraint we need to instrument the conditional jumps themselves. In Algorithm 3.1 in Chapter
3 we described the process of instrumenting a conditional jump instruction. We insert two callbacks for each
conditional jump: one on the path resulting from a true condition and one on the path resulting from a
false condition.
Listing 4.6: “Simulating a mov instruction”
1 VOID
2 x86Simulator::simMov_MR(UINT32 regId, ADDRINT memR, ADDRINT memRSize, THREADID id, ADDRINT pc)
3 {
4 SourceInfo si;
5
6 // If the source location is not tainted then untaint the destination
7 if (!tmgr.isMemLocTainted(memR, memRSize)) {
8 tmgr.unTaintReg(regId, id);
9 return;
10 }
11
12 // Set the information on the source operand
13 si.type = MEMORY;
14 // The mov instruction reads from address memR
15 si.loc.addr = memR;
16
17 vector<SourceInfo> sources;
18 sources.push_back(si);
19
20 TaintInfoPtr tiPtr = tmgr.createNewTaintInfo(sources, (unsigned)memRSize,
21 DIR_COPY, X_ASSIGN, 0);
22 tmgr.updateTaintInfoR(regId, tiPtr, id);
23 }

MSc Computer Science Dissertation


Introduction
1.1 Introduction
In this work we will consider the problem of automatic generation of exploits for software vulnerabilities. We
provide a formal definition for the term “exploit” in Chapter 2 but, informally, we can describe an exploit
as a program input that results in the execution of malicious code¹. We define malicious code as a sequence
of bytes injected by an attacker into the program that subverts the security of the targeted system. This is
typically called shellcode. Exploits of this kind often take advantage of programmer errors relating to memory
management or variable typing in applications developed in C and C++. These errors can lead to buffer
overflows in which too much data is written to a memory buffer, resulting in the corruption of unintended
memory locations. An exploit will leverage this corruption to manipulate sensitive memory locations with
the aim of hijacking the control flow of the application.
Such exploits are typically built by hand and require manual analysis of the control flow of the application
and the manipulations it performs on input data. In applications that perform complex arithmetic
modifications or impose extensive conditions on the input, this is a very difficult task. The task resembles
many problems to which automated program analysis techniques have already been successfully applied
[38, 27, 14, 43, 29, 9, 10, 15]. Much of this research describes systems that consist of data-flow analysis in
combination with a decision procedure. Our approach extends techniques previously used in the context of
other program analysis problems and also encompasses a number of new algorithms for situations unique to
exploit generation.
1.2 Motivation
Due to constraints on time and programmer effort it is necessary to triage software bugs into those that
are serious versus those that are relatively benign. In many cases security vulnerabilities are of critical
importance, but it can be difficult to decide whether a bug is usable by an attacker for malicious purposes or
not. Crafting an exploit for a bug is often the only way to reliably determine if it is a security vulnerability.
This is not always feasible though as it can be a time consuming activity and requires low-level knowledge
of file formats, assembly code, operating system internals and CPU architecture. Without a mechanism
to create exploits developers risk misclassifying bugs. Classifying a security-relevant bug incorrectly could
result in customers being exposed to the risk for an extended period of time. On the other hand, classifying
a benign bug as security-relevant could slow down the development process and cause extensive delays as it
is investigated. As a result, there has been increasing interest in techniques applicable to Automatic
Exploit Generation (AEG).
¹ We consider exploits for vulnerabilities resulting from memory corruption. Such vulnerabilities are among the most common
encountered in modern software. They are typically exploited by injecting malicious code and then redirecting execution to
that code. Other vulnerability types, such as those relating to design flaws or logic problems, are not considered here.
The challenge of AEG is to construct a program input that results in the execution of shellcode. As the
starting point for our approach we have decided to use a program input that is known to cause a crash.
Modern automated testing methods routinely generate many of these inputs in a testing session, each of
which must be manually inspected in order to determine the severity of the underlying bug.
Previous research on automated exploit generation has addressed the problem of generating inputs that
corrupt the CPU’s instruction pointer. This research is typically criticised by pointing out that crashing a
program is not the same as exploiting it [1]. Therefore, we believe it is necessary to take the AEG process a
step further and generate inputs that not only corrupt the instruction pointer but result in the execution of
shellcode. The primary aim of this work is to clarify the problems that are encountered when automatically
generating exploits that fit this description and to present the solutions we have developed.
We perform data-flow analysis over the path executed as a result of supplying a crash-causing input
to the program under test. The information gathered during data-flow analysis is then used to generate
propositional formulae that constrain the input to values that result in the execution of shellcode. We
motivate this approach by the observation that at a high level we are trying to answer the question “Is it
possible to change the test input in such a way that it executes attacker specified code?”. At its core, this
problem involves analysing how data is moved through program memory and what constraints are imposed
on it by conditional statements in the code.
1.3 Related Work
Previous work can be categorised by their approaches to data-flow analysis and their final result. On one
side is research based on techniques from program analysis and verification. These projects typically use
dynamic run-time instrumentation to perform data-flow analysis and then build formulae describing the
program’s execution. While several papers have discussed how to use such techniques to corrupt the CPU’s
instruction pointer, they do not discuss how this corruption is exploited to execute shellcode. Significant
challenges are encountered when one attempts to take this step from crashing the program to execution of
shellcode.
Alternatives to the above approach are demonstrated in tools from the security community [37, 28] that
use ad-hoc pattern matching in memory to relate the test input to the memory layout of the program at the
time of the crash. An exploit is then typically generated by using this information to complete a template.
This approach suffers from a number of problems as it ignores modifications and constraints applied to
program input. As a result it can produce both false positives and false negatives, without any information
as to why the exploit failed to work or failed to be generated.
The following are papers that deal directly with the problem of generating exploits:
(i) Automatic Patch-Based Exploit Generation is Possible: Techniques and Implications - This paper [11]
is the closest academic paper, in terms of subject matter, to our work. An approach is proposed and
demonstrated that takes a program P and a patched version P′, and produces a sample input for P
that exercises the vulnerability patched in P′. Using the assumption that any new constraints added
by the patched version relate to the vulnerability they generate an input that violates these constraints
but passes all others along a path to the vulnerability point (e.g. the first out of bounds write). The
expected result of providing such an input to P is that it will trigger the vulnerability. Their approach
works on binary executables, using data-flow analysis to derive a path condition and then solving such
conditions using the decision procedure STP to produce a new program input.
As the generated program input is designed to violate the added constraints it will likely cause a
crash due to some form of memory corruption. The possibility of generating an exploit that results
in shellcode execution is largely ignored. In the evaluation a specific case in which the control flow
was successfully hijacked is given, but how this would be achieved automatically is not
described.
(ii) Convicting Exploitable Software Vulnerabilities: An Efficient Input Provenance Based Approach - This
paper [35] again focuses on exploit generation but uses a “suspect input” as its starting point instead
of the differences between two program binaries. Once again data-flow analysis is used to build a path
condition which is then used to generate a new input using a decision procedure. User interaction is
required to specify how to mutate input to meet certain path conditions. As in the previous case,
the challenges and benefits involved in generating an exploit that results in shellcode execution are not
discussed.
(iii) Byakugan - Byakugan [28] is an extension for the Windows debugger, WinDbg, that can search through
program memory and attempt to match sequences of bytes from an input to those found in memory. It
can work with the Metasploit [39] tool to assist in the generation of exploits. In terms of the desired end
result, this is similar to our approach although it suffers from the limitations of pattern matching.
When searching in memory the tool accounts for common modifications to data such as converting to
upper/lower case and unicode encoding, but will miss all others. It makes no attempt at tracking path
conditions and as a result can offer no guarantees on what parts of the input are safe to change and
still trigger the vulnerability.
(iv) Automated Exploit Development, The future of exploitation is here - This document [37] is a whitepaper
describing the techniques used in the Prototype-8 tool for automated exploit generation. The generation
of control flow hijacking exploits is the focus of the tool. This is achieved by attaching a debugger to
a running process and monitoring its execution for erroneous events as test cases are delivered to the
program. When such an event occurs the tool follows a static set of rules to create an exploit based
on what type of vulnerability was discovered (i.e. it distinguishes between stack and heap overflows).
These rules attempt to determine what parts of the input data overwrote what sensitive data and hence
may be used to gain control of the program execution. Once this is determined these values are used to
generate an exploit based on a template for the vulnerability type. No attempt is made to determine
constraints that may exist on this input or to customise the exploit template to pass these constraints.
(v) Automatic Discovery of API-Level Exploits - In this paper [25] a framework is presented to model the
details of the APIs provided by functions such as printf. Once the effects of these API features have
been formalised they can be used in predicates specifying the conditions required for an exploit. These
predicates can then be automatically solved to provide API call sequences that exploit a vulnerability.
This approach is restricted to creating exploits where all required memory corruption can be introduced
via a single API, such as printf.
As well as the above papers, the BitBlaze project [50] has resulted in a number of papers that do not
deal explicitly with the generation of exploits but do solve related problems. Approaching the issue of
automatically generating signatures for vulnerabilities [9, 10] they describe a number of useful techniques
for gathering constraints up to a particular vulnerability point and using these constraints to describe data
that might constitute an exploit.
There is also extensive previous work on data-flow analysis, taint propagation, constraint solving and
symbolic execution. Combinations of these techniques to other ends, such as vulnerability discovery [27, 14],
dynamic exploit detection [43] and general program analysis [29] are now common.
1.4 Thesis
Our thesis is as follows:
Given an executable program and an input that causes it to crash there exists a sound algorithm to determine
if a control flow hijacking exploit is possible. If a control flow hijacking exploit is possible there exists
an algorithm that will automatically generate this exploit.
The purpose of this work is to investigate the above thesis and attempt to discover and implement a
satisfying algorithm. Due to the sheer number of ways in which a program may crash, and a vulnerability be
exploited, it is necessary to limit our research to a subset of the possible exploit types. In our investigation
we impose the following practical limits²:
1. Data derived from user input corrupts a stored instruction pointer, function pointer or the destination
location and source value of a write instruction.
2. Address space layout randomisation may be enabled on the system but no other exploit prevention
mechanisms are in place.
3. Shellcode is not automatically generated and must be provided to the exploit generation algorithm.

Saturday, August 7, 2010

How To Write A Dissertation


So, you are preparing to write a Ph.D. dissertation in an experimental area of Computer Science. Unless you have written many formal documents before, you are in for a surprise: it's difficult!

There are two possible paths to success:

    • Planning Ahead.

      Few take this path. The few who do leave the University so quickly that they are hardly noticed. If you want to make a lasting impression and have a long career as a graduate student, do not choose it.

    • Perseverance.

      All you really have to do is outlast your doctoral committee. The good news is that they are much older than you, so you can guess who will eventually expire first. The bad news is that they are more practiced at this game (after all, they persevered in the face of their doctoral committee, didn't they?).

Here are a few guidelines that may help you when you finally get serious about writing. The list goes on forever; you probably won't want to read it all at once. But, please read it before you write anything.


The General Idea:
  1. A thesis is a hypothesis or conjecture.

  2. A PhD dissertation is a lengthy, formal document that argues in defense of a particular thesis. (So many people use the term ``thesis'' to refer to the document that a current dictionary now includes it as the third meaning of ``thesis'').

  3. Two important adjectives used to describe a dissertation are ``original'' and ``substantial.'' The research performed to support a thesis must be both, and the dissertation must show it to be so. In particular, a dissertation highlights original contributions.

  4. The scientific method means starting with a hypothesis and then collecting evidence to support or deny it. Before one can write a dissertation defending a particular thesis, one must collect evidence that supports it. Thus, the most difficult aspect of writing a dissertation consists of organizing the evidence and associated discussions into a coherent form.

  5. The essence of a dissertation is critical thinking, not experimental data. Analysis and concepts form the heart of the work.

  6. A dissertation concentrates on principles: it states the lessons learned, and not merely the facts behind them.

  7. In general, every statement in a dissertation must be supported either by a reference to published scientific literature or by original work. Moreover, a dissertation does not repeat the details of critical thinking and analysis found in published sources; it uses the results as fact and refers the reader to the source for further details.

  8. Each sentence in a dissertation must be complete and correct in a grammatical sense. Moreover, a dissertation must satisfy the stringent rules of formal grammar (e.g., no contractions, no colloquialisms, no slurs, no undefined technical jargon, no hidden jokes, and no slang, even when such terms or phrases are in common use in the spoken language). Indeed, the writing in a dissertation must be crystal clear. Shades of meaning matter; the terminology and prose must make fine distinctions. The words must convey exactly the meaning intended, nothing more and nothing less.

  9. Each statement in a dissertation must be correct and defensible in a logical and scientific sense. Moreover, the discussions in a dissertation must satisfy the most stringent rules of logic applied to mathematics and science.

What One Should Learn From The Exercise:

  1. All scientists need to communicate discoveries; the PhD dissertation provides training for communication with other scientists.

  2. Writing a dissertation requires a student to think deeply, to organize technical discussion, to muster arguments that will convince other scientists, and to follow rules for rigorous, formal presentation of the arguments and discussion.

A Rule Of Thumb:

    Good writing is essential in a dissertation. However, good writing cannot compensate for a paucity of ideas or concepts. Quite the contrary, a clear presentation always exposes weaknesses.

Definitions And Terminology:

  1. Each technical term used in a dissertation must be defined either by a reference to a previously published definition (for standard terms with their usual meaning) or by a precise, unambiguous definition that appears before the term is used (for a new term or a standard term used in an unusual way).

  2. Each term should be used in one and only one way throughout the dissertation.

  3. The easiest way to avoid a long series of definitions is to include a statement: ``the terminology used throughout this document follows that given in [CITATION].'' Then, only define exceptions.

  4. The introductory chapter can give the intuition (i.e., informal definitions) of terms provided they are defined more precisely later.

Terms And Phrases To Avoid:

  • adverbs
      Mostly, they are very often overly used. Use strong words instead. For example, one could say, ``Writers abuse adverbs.''
  • jokes or puns
      They have no place in a formal document.
  • ``bad'', ``good'', ``nice'', ``terrible'', ``stupid''
      A scientific dissertation does not make moral judgements. Use ``incorrect/correct'' to refer to factual correctness or errors. Use precise words or phrases to assess quality (e.g., ``method A requires less computation than method B''). In general, one should avoid all qualitative judgements.
  • ``true'', ``pure'',
      In the sense of ``good'' (it is judgemental).
  • ``perfect''
      Nothing is.
  • ``an ideal solution''
      You're judging again.
  • ``today'', ``modern times''
      Today is tomorrow's yesterday.
  • ``soon''
      How soon? Later tonight? Next decade?
  • ``we were surprised to learn...''
      Even if you were, so what?
  • ``seems'', ``seemingly'',
      It doesn't matter how something appears;
  • ``would seem to show''
      all that matters are the facts.
  • ``in terms of''
      usually vague
  • ``based on'', ``X-based'', ``as the basis of''
      careful; can be vague
  • ``different''
      Does not mean ``various''; different than what?
  • ``in light of''
      colloquial
  • ``lots of''
      vague & colloquial
  • ``kind of''
      vague & colloquial
  • ``type of''
      vague & colloquial
  • ``something like''
      vague & colloquial
  • ``just about''
      vague & colloquial
  • ``number of''
      vague; do you mean ``some'', ``many'', or ``most''? A quantitative statement is preferable.
  • ``due to''
      colloquial
  • ``probably''
      only if you know the statistical probability (if you do, state it quantitatively)
  • ``obviously, clearly''
      be careful: obvious/clear to everyone?
  • ``simple''
      Can have a negative connotation, as in ``simpleton''
  • ``along with''
      Just use ``with''
  • ``actually, really''
      define terms precisely to eliminate the need to clarify
  • ``the fact that''
      makes it a meta-sentence; rephrase
  • ``this'', ``that''
      As in ``This causes concern.'' Reason: ``this'' can refer to the subject of the previous sentence, the entire previous sentence, the entire previous paragraph, the entire previous section, etc. More important, it can be interpreted in the concrete sense or in the meta-sense. For example, in: ``X does Y. This means ...'' the reader can assume ``this'' refers to Y or to the fact that X does it. Even when restricted (e.g., ``this computation...''), the phrase is weak and often ambiguous.
  • ``You will read about...''
      The second person has no place in a formal dissertation.
  • ``I will describe...''
      The first person has no place in a formal dissertation. If self-reference is essential, phrase it as ``Section 10 describes...''
  • ``we'' as in ``we see that''
      A trap to avoid. Reason: almost any sentence can be written to begin with ``we'' because ``we'' can refer to: the reader and author, the author and advisor, the author and research team, experimental computer scientists, the entire computer science community, the science community, or some other unspecified group.
  • ``Hopefully, the program...''
      Computer programs don't hope, not unless they implement AI systems. By the way, if you are writing an AI thesis, talk to someone else: AI people have their own system of rules.
  • ``...a famous researcher...''
      It doesn't matter who said it or who did it. In fact, such statements prejudice the reader.
  • Be Careful When Using ``few, most, all, any, every''.
      A dissertation is precise. If a sentence says ``Most computer systems contain X'', you must be able to defend it. Are you sure you really know the facts? How many computers were built and sold yesterday?
  • ``must'', ``always''
      Absolutely?
  • ``should''
      Who says so?
  • ``proof'', ``prove''
      Would a mathematician agree that it's a proof?
  • ``show''
      Used in the sense of ``prove''. To ``show'' something, you need to provide a formal proof.
  • ``can/may''
      Your mother probably told you the difference.

Voice:

    Use active constructions. For example, say ``the operating system starts the device'' instead of ``the device is started by the operating system.''

Tense:

    Write in the present tense. For example, say ``The system writes a page to the disk and then uses the frame...'' instead of ``The system will use the frame after it wrote the page to disk...''

Define Negation Early:

    Example: say ``no data block waits on the output queue'' instead of ``a data block awaiting output is not on the queue.''

Grammar And Logic:

    Be careful that the subject of each sentence really does what the verb says it does. Saying ``Programs must make procedure calls using the X instruction'' is not the same as saying ``Programs must use the X instruction when they call a procedure.'' In fact, the first is patently false! Another example: ``RPC requires programs to transmit large packets'' is not the same as ``RPC requires a mechanism that allows programs to transmit large packets.''

    All computer scientists should know the rules of logic. Unfortunately the rules are more difficult to follow when the language of discourse is English instead of mathematical symbols. For example, the sentence ``There is a compiler that translates the N languages by...'' means a single compiler exists that handles all the languages, while the sentence ``For each of the N languages, there is a compiler that translates...'' means that there may be 1 compiler, 2 compilers, or N compilers. When written using mathematical symbols, the differences are obvious because ``for all'' and ``there exists'' are reversed.

Focus On Results And Not The People/Circumstances In Which They Were Obtained:

    ``After working eight hours in the lab that night, we realized...'' has no place in the dissertation. It doesn't matter when you realized it or how long you worked to obtain the answer. Another example: ``Jim and I arrived at the numbers shown in Table 3 by measuring...'' Put an acknowledgement to Jim in the dissertation, but do not include names (even your own) in the main body. You may be tempted to document a long series of experiments that produced nothing or a coincidence that resulted in success. Avoid it completely. In particular, do not document seemingly mystical influences (e.g., ``if that cat had not crawled through the hole in the floor, we might not have discovered the power supply error indicator on the network bridge''). Never attribute such events to mystical causes or imply that strange forces may have affected your results. Summary: stick to the plain facts. Describe the results without dwelling on your reactions or events that helped you achieve them.

Avoid Self-Assessment (both praise and criticism):

    Both of the following examples are incorrect: ``The method outlined in Section 2 represents a major breakthrough in the design of distributed systems because...'' ``Although the technique in the next section is not earthshaking,...''

References To Extant Work:

    One always cites papers, not authors. Thus, one uses a singular verb to refer to a paper even though it has multiple authors. For example ``Johnson and Smith [J&S90] reports that...''

    Avoid the phrase ``the authors claim that X''. The use of ``claim'' casts doubt on ``X'' because it references the authors' thoughts instead of the facts. If you agree ``X'' is correct, simply state ``X'' followed by a reference. If one absolutely must reference a paper instead of a result, say ``the paper states that...'' or ``Johnson and Smith [J&S 90] presents evidence that...''.

Concept Vs. Instance:

    A reader can become confused when a concept and an instance of it are blurred. Common examples include: an algorithm and a particular program that implements it, a programming language and a compiler, a general abstraction and its particular implementation in a computer system, a data structure and a particular instance of it in memory.

Terminology For Concepts And Abstractions

    When defining the terminology for a concept, be careful to decide precisely how the idea translates to an implementation. Consider the following discussion:

    VM systems include a concept known as an address space. The system dynamically creates an address space when a program needs one, and destroys an address space when the program that created the space has finished using it. A VM system uses a small, finite number to identify each address space. Conceptually, one understands that each new address space should have a new identifier. However, if a VM system executes so long that it exhausts all possible address space identifiers, it must reuse a number.

    The important point is that the discussion only makes sense because it defines ``address space'' independently from ``address space identifier''. If one expects to discuss the differences between a concept and its implementation, the definitions must allow such a distinction.

Knowledge Vs. Data

    The facts that result from an experiment are called ``data''. The term ``knowledge'' implies that the facts have been analyzed, condensed, or combined with facts from other experiments to produce useful information.

Cause and Effect:

    A dissertation must carefully separate cause-effect relationships from simple statistical correlations. For example, even if all computer programs written in Professor X's lab require more memory than the computer programs written in Professor Y's lab, it may not have anything to do with the professors or the lab or the programmers (e.g., maybe the people working in professor X's lab are working on applications that require more memory than the applications in professor Y's lab).

Drawing Only Warranted Conclusions:

    One must be careful to only draw conclusions that the evidence supports. For example, if programs run much slower on computer A than on computer B, one cannot conclude that the processor in A is slower than the processor in B unless one has ruled out all differences in the computers' operating systems, input or output devices, memory size, memory cache, or internal bus bandwidth. In fact, one must still refrain from judgement unless one has the results from a controlled experiment (e.g., running a set of several programs many times, each when the computer is otherwise idle). Even if the cause of some phenomenon seems obvious, one cannot draw a conclusion without solid, supporting evidence.

Commerce and Science:

    In a scientific dissertation, one never draws conclusions about the economic viability or commercial success of an idea/method, nor does one speculate about the history of development or origins of an idea. A scientist must remain objective about the merits of an idea independent of its commercial popularity. In particular, a scientist never assumes that commercial success is a valid measure of merit (many popular products are neither well-designed nor well-engineered). Thus, statements such as ``over four hundred vendors make products using technique Y'' are irrelevant in a dissertation.

Politics And Science:

    A scientist avoids all political influence when assessing ideas. Obviously, it should not matter whether government bodies, political parties, religious groups, or other organizations endorse an idea. More important and often overlooked, it does not matter whether an idea originated with a scientist who has already won a Nobel prize or a first-year graduate student. One must assess the idea independent of the source.

Canonical Organization:

    In general, every dissertation must define the problem that motivated the research, tell why that problem is important, tell what others have done, describe the new contribution, document the experiments that validate the contribution, and draw conclusions. There is no canonical organization for a dissertation; each is unique. However, novices writing a dissertation in the experimental areas of CS may find the following example a good starting point:
    • Chapter 1: Introduction

        An overview of the problem; why it is important; a summary of extant work and a statement of your hypothesis or specific question to be explored. Make it readable by anyone.
    • Chapter 2: Definitions

        New terms only. Make the definitions precise, concise, and unambiguous.
    • Chapter 3: Conceptual Model

        Describe the central concept underlying your work. Make it a ``theme'' that ties together all your arguments. It should provide an answer to the question posed in the introduction at a conceptual level. If necessary, add another chapter to give additional reasoning about the problem or its solution.
    • Chapter 4: Experimental Measurements

        Describe the results of experiments that provide evidence in support of your thesis. Usually experiments either emphasize proof-of-concept (demonstrating the viability of a method/technique) or efficiency (demonstrating that a method/technique provides better performance than those that exist).
    • Chapter 5: Corollaries And Consequences

        Describe variations, extensions, or other applications of the central idea.
    • Chapter 6: Conclusions

        Summarize what was learned and how it can be applied. Mention the possibilities for future research.
    • Abstract:

        A short (few paragraphs) summary of the dissertation. Describe the problem and the research approach. Emphasize the original contributions.

Suggested Order For Writing:

    The easiest way to build a dissertation is inside-out. Begin by writing the chapters that describe your research (3, 4, and 5 in the above outline). Collect terms as they arise and keep a definition for each. Define each technical term, even if you use it in a conventional manner.

    Organize the definitions into a separate chapter. Make the definitions precise and formal. Review later chapters to verify that each use of a technical term adheres to its definition. After reading the middle chapters to verify terminology, write the conclusions. Write the introduction next. Finally, complete an abstract.

Key To Success:

    By the way, there is a key to success: practice. No one ever learned to write by reading essays like this. Instead, you need to practice, practice, practice. Every day.

Thursday, August 5, 2010

A master’s degree thesis


Nobody will deny that a master’s degree thesis is a long and daunting piece of work, and it is an obligatory requirement if you want to get your diploma. However, do not hurry to panic. With some planning and organization skills, almost all students manage to complete their master’s degree theses successfully.

Do you want to know more about these techniques? Do you want to know the main secrets of a successful thesis that can bring you a master’s degree? We are glad to share them!

Successful master’s degree thesis: secret 1

Since you are going to deal with a really long piece of work, start as early as possible. Procrastination is one of the leading causes of failure.

Successful master’s degree thesis: secret 2

Before getting down to work, make sure you know all the specific requirements of your institution. As a rule, almost all master’s degree theses are organized according to a similar pattern. However, many institutions have their own specific requirements that you definitely have to follow.

Successful master’s degree thesis: secret 3

Do not wait until your master’s degree thesis is finished to submit it for review. Instead, submit each new section as soon as it is done. First, it is easier for your advisor to check your thesis in chunks. Second, you will have less work if you only need to correct mistakes in one chapter rather than in the whole project.

Successful master’s degree thesis: secret 4

Pay special attention to the literature review section of your master’s thesis. Do you remember the main purpose of completing this huge project? You have to demonstrate an in-depth understanding of the chosen field of study. A perfect literature review is the most effective way to do it.

Successful master’s degree thesis: secret 5

Keep all materials (documents, photographs, statistics, etc.) that you use for writing your master’s degree thesis in one place. You will have to use some of them to make appendices.

Our writers can also help you get ready for a thesis defense.


A dissertation usually takes a great deal of time and research to complete. It is understandable that students look for other ways to succeed, such as purchasing a custom dissertation/thesis paper. Perfectly written, it will give you spare time to catch up with your other courses.

Do not fool yourself with cheap services. Buy a custom paper written in accordance with your specific instructions. Pay via PayPal. Order your paper and get a free plagiarism report as a sign of our goodwill. Get such longed-for help with your studies!

Space changing with time


Think of a very large ball. Even though you look at the ball in three space dimensions, the outer surface of the ball has the geometry of a sphere in two dimensions, because there are only two independent directions of motion along the surface. If you were very small and lived on the surface of the ball you might think you weren't on a ball at all, but on a big flat two-dimensional plane. But if you were to carefully measure distances on the sphere, you would discover that you were not living on a flat surface but on the curved surface of a large sphere.
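To see how "carefully measuring distances" would reveal the curvature, here is a minimal Python sketch (the sphere radius and circle sizes are made-up numbers): on a flat plane a circle of radius r has circumference 2πr, while on a sphere of radius R a circle whose radius measured along the surface is r has circumference only 2πR·sin(r/R), and the shortfall grows with the size of the circle.

import math

def circle_circumference(r, sphere_radius=None):
    """Circumference of a circle of surface radius r.

    With sphere_radius=None the surface is a flat plane (C = 2*pi*r);
    otherwise the circle is drawn on a sphere of that radius (C = 2*pi*R*sin(r/R))."""
    if sphere_radius is None:
        return 2.0 * math.pi * r
    return 2.0 * math.pi * sphere_radius * math.sin(r / sphere_radius)

R = 100.0                      # made-up sphere radius
for r in (1.0, 10.0, 50.0):    # made-up circle sizes
    flat = circle_circumference(r)
    curved = circle_circumference(r, sphere_radius=R)
    print(f"r = {r:5.1f}  flat: {flat:9.3f}  on sphere: {curved:9.3f}  shortfall: {flat - curved:.3f}")

A two-dimensional surveyor who found this shortfall could conclude, without ever leaving the surface, that the surface is curved.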
The idea of the curvature of the surface of the ball can apply to the whole Universe at once. That was the great breakthrough in Einstein's theory of general relativity. Space and time are unified into a single geometric entity called spacetime, and spacetime has a geometry of its own: it can be curved just like the surface of a large ball is curved.
When you look at or feel the surface of a large ball as a whole, you are experiencing the whole space of a sphere at once. The way mathematicians prefer to define the surface of that sphere is to describe the entire sphere, not just a part of it. One of the tricky aspects of describing a spacetime geometry is that we need to describe the whole of space and the whole of time. That means everywhere and forever at once. Spacetime geometry is the geometry of all space and all time together as one mathematical entity.

What determines spacetime geometry?

Physicists generally work by looking for the equations of motion whose solutions best describe the system they are interested in. The Einstein equation is the classical equation of motion for spacetime. It's a classical equation of motion because quantum behavior is never considered. The geometry of spacetime is treated as being classically certain, without any fuzzy quantum probabilities. For this reason, it is at best an approximation to the exact theory.
The Einstein equation says that the curvature in spacetime in a given direction is directly related to the energy and momentum of everything in the spacetime that isn't spacetime itself. In other words, the Einstein equation is what ties gravity to non-gravity, geometry to non-geometry. The curvature is the gravity, and all of the "other stuff" -- the electrons and quarks that make up the atoms that make up matter, the electromagnetic radiation, every particle that mediates every force that isn't gravity -- lives in the curved spacetime and at the same time determines its curvature through the Einstein equation.
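For reference, the equation described above is usually written as follows (standard textbook notation, added here rather than taken from the post): the curvature side is the Einstein tensor built from the spacetime metric, and the "other stuff" side is the stress-energy tensor; vacuum energy can either appear as a cosmological-constant term on the left or be folded into T_{\mu\nu}:

    G_{\mu\nu} \;\equiv\; R_{\mu\nu} - \tfrac{1}{2} R \, g_{\mu\nu} \;=\; \frac{8\pi G}{c^{4}} \, T_{\mu\nu}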

What is the geometry of our spacetime?

As mentioned previously, the full description of a given spacetime includes not only all of space but also all of time. In other words, everything that ever happened and ever will happen in that spacetime.
Now, of course, if we took that too literally, we would be in trouble, because we can't keep track of every little thing that ever happened and ever will happen to change the distribution of energy and momentum in the Universe. Luckily, humans are gifted with the powers of abstraction and approximation, so we can make abstract models that approximate the real Universe fairly well at large distances, say at the scale of galactic clusters.
To solve the equations, simplifying assumptions also have to be made about the spacetime curvature. The first assumption we'll make is that spacetime can be neatly separated into space and time. This isn't always true in curved spacetime: in some cases, such as around a spinning black hole, space and time get twisted together and can no longer be neatly separated. But there is no evidence that the Universe is spinning around in a way that would cause that to happen. So the assumption that all of spacetime can be described as space changing with time is well justified.
The next important assumption, the one behind the Big Bang theory, is that at every time in the Universe, space looks the same in every direction at every point. Looking the same in every direction is called isotropic, and looking the same at every point is called homogeneous. So we're assuming that space is homogeneous and isotropic. Cosmologists call this the assumption of maximal symmetry. At the large distance scales relevant to cosmology, it turns out to be a reasonable approximation to make.
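Under these symmetry assumptions, the spacetime metric can be put in the standard Friedmann-Robertson-Walker form (textbook notation, added here for reference): all of the time dependence sits in a single scale factor a(t), and the constant k = +1, 0 or -1 labels the three possible spatial geometries (closed, flat or open):

    ds^{2} \;=\; -c^{2}\,dt^{2} + a^{2}(t)\left[ \frac{dr^{2}}{1 - k r^{2}} + r^{2}\,d\theta^{2} + r^{2}\sin^{2}\theta \, d\phi^{2} \right]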
When cosmologists solve the Einstein equation for the spacetime geometry of our Universe, they consider three basic types of energy that could curve spacetime:
1. Vacuum energy
2. Radiation
3. Matter
The radiation and matter in the Universe are treated like uniform gases with equations of state that relate pressure to density.
Once the assumptions of uniform energy sources and maximal symmetry of space have been made, the Einstein equation reduces to two ordinary differential equations that are easy to solve using basic calculus. The solutions tell us two things: the geometry of space, and how the size of space changes with time.
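Written out in the usual textbook form (again added for reference, not quoted from the post), those two ordinary differential equations are the Friedmann equations for the scale factor a(t), and each energy type is assigned an equation of state p = w\rho c^{2}:

    \left( \frac{\dot{a}}{a} \right)^{2} \;=\; \frac{8\pi G}{3}\,\rho \;-\; \frac{k c^{2}}{a^{2}}, \qquad \frac{\ddot{a}}{a} \;=\; -\,\frac{4\pi G}{3}\left( \rho + \frac{3 p}{c^{2}} \right)

Here w = -1 for vacuum energy, w = 1/3 for radiation and w = 0 for matter, so the density dilutes as \rho \propto a^{-3(1+w)}: it stays constant for vacuum energy, falls as a^{-4} for radiation and as a^{-3} for matter.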

Wednesday, August 4, 2010

String theory

Think of a guitar string that has been tuned by stretching the string under tension across the guitar. Depending on how the string is plucked and how much tension is in the string, different musical notes will be created by the string. These musical notes could be said to be excitation modes of that guitar string under tension.
In a similar manner, in string theory, the elementary particles we observe in particle accelerators could be thought of as the "musical notes" or excitation modes of elementary strings.
In string theory, as in guitar playing, the string must be stretched under tension in order to become excited. However, the strings in string theory are floating in spacetime; they aren't tied down to a guitar. Nonetheless, they have tension. The string tension in string theory is denoted by the quantity 1/(2πα'), where α' is pronounced "alpha prime" and is equal to the square of the string length scale.
If string theory is to be a theory of quantum gravity, then the average size of a string should be somewhere near the length scale of quantum gravity, called the Planck length, which is about 10^-33 centimeters, or about a millionth of a billionth of a billionth of a billionth of a centimeter. Unfortunately, this means that strings are way too small to see with current or expected particle physics technology (or financing!!), and so string theorists must devise more clever methods to test the theory than just looking for little strings in particle experiments.
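As a quick sanity check on that figure, the Planck length follows from the fundamental constants as l_P = sqrt(ħG/c³); here is a minimal Python sketch (constants rounded):

import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
G    = 6.67430e-11       # Newton's gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8      # speed of light, m/s

planck_length_m  = math.sqrt(hbar * G / c**3)   # about 1.6e-35 m
planck_length_cm = planck_length_m * 100.0      # about 1.6e-33 cm
print(f"Planck length ~ {planck_length_cm:.1e} cm")

This comes out to roughly 1.6 x 10^-33 centimeters, consistent with the figure quoted above.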
String theories are classified according to whether or not the strings are required to be closed loops, and whether or not the particle spectrum includes fermions. In order to include fermions in string theory, there must be a special kind of symmetry called supersymmetry, which means that for every boson (a particle that transmits a force) there is a corresponding fermion (a particle that makes up matter). So supersymmetry relates the particles that transmit forces to the particles that make up matter.
Supersymmetric partners to the currently known particles have not been observed in particle experiments, but theorists believe this is because supersymmetric particles are too massive to be detected at current accelerators. Particle accelerators could be on the verge of finding evidence for high-energy supersymmetry in the next decade. Evidence for supersymmetry at high energy would be compelling evidence that string theory is a good mathematical model for Nature at the smallest distance scales.

Tuesday, August 3, 2010

Antimatter


Antimatter sounds like the stuff of science fiction, and it is. But it's also very real. Antimatter is created and annihilated in stars every day. Here on Earth it's harnessed for medical brain scans.

"Antimatter is around us each day, although there isn't very much of it," says Gerald Share of the Naval Research Laboratory. "It is not something that can be found by itself in a jar on a table."

So Share went looking for evidence of some in the Sun, a veritable antimatter factory, leading to new results that provide limited fresh insight into these still-mysterious particles.

Simply put, an antimatter particle is the counterpart of an ordinary matter particle with its electrical charge reversed. The common proton has an antimatter counterpart called the antiproton; it has the same mass but the opposite charge. The electron's counterpart is called a positron.

Antimatter particles are created in ultra high-speed collisions.

One example is when a high-energy proton in a solar flare collides with carbon, Share explained in an e-mail interview. "It can form a type of nitrogen that has too many protons relative to its number of neutrons." This makes its nucleus unstable, and a positron is emitted to stabilize the situation.

But positrons don't last long. When they hit an electron, they annihilate and produce energy.
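The energy released is set by the rest masses: the electron and the positron each carry a rest energy of about 511 keV, so a low-energy annihilation yields two gamma-ray photons of roughly 511 keV each, about 1.022 MeV in total:

    e^{+} + e^{-} \;\rightarrow\; 2\gamma, \qquad E_{\gamma} \;\approx\; m_{e} c^{2} \;\approx\; 511\ \mathrm{keV}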

"So the cycle is complete, and for this reason there is so little antimatter around at a given time," Share said.

The antimatter wars

To better understand the elusive nature of antimatter, we must back up to the beginning of time.

In the first seconds after the Big Bang, there was no matter, scientists suspect. Just energy. As the universe expanded and cooled, particles of regular matter and antimatter were formed in almost equal amounts.

But, theory holds, a slightly higher percentage of regular matter developed -- perhaps just one part in a million -- for unknown reasons. That was all the edge needed for regular matter to win the longest running war in the cosmos.

"When the matter and antimatter came into contact they annihilated, and only the residual amount of matter was left to form our current universe," Share says.

Antimatter was first theorized based on work done in 1928 by the physicist Paul Dirac. The positron was discovered in 1932. Science fiction writers latched onto the concept and wrote of antiworlds and antiuniverses.

Potential power

Antimatter has tremendous energy potential, if it could ever be harnessed. A solar flare in July 2002 created about a pound of antimatter, or half a kilo, according to new NASA-led research. That's enough to power the United States for two days.
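A rough back-of-the-envelope check of that claim, as a Python sketch: the half kilo of antimatter is assumed to annihilate with an equal mass of ordinary matter, and the U.S. daily electricity figure is an assumed round number (about 4 x 10^16 joules per day), not taken from the article.

# Back-of-the-envelope: energy from annihilating 0.5 kg of antimatter with
# 0.5 kg of ordinary matter, i.e. 1 kg of rest mass converted to energy.
c = 2.99792458e8                    # speed of light, m/s
mass_converted = 1.0                # kg (0.5 kg antimatter + 0.5 kg matter)
energy_joules = mass_converted * c**2          # E = m c^2, roughly 9e16 J

us_daily_electricity = 4e16         # J per day -- assumed ballpark figure
print(energy_joules / us_daily_electricity)    # roughly 2, i.e. about two days' worth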

Laboratory particle accelerators can produce high-energy antimatter particles, too, but only in tiny quantities. Something on the order of a billionth of a gram or less is produced every year.

Nonetheless, sci-fi writers long ago devised schemes using antimatter to power space travelers beyond light-speed. Antimatter didn't get a bad name, but it sank into the collective consciousness as a purely fictional concept. Given some remarkable physics breakthrough, antimatter could in theory power a spacecraft. But NASA researchers say it's nothing that will happen in the foreseeable future.

Meanwhile, antimatter has proved vitally useful for medical purposes. The fleeting particles of antimatter are also created by the decay of radioactive material, which can be injected into a patient in order to perform a Positron Emission Tomography, or PET, scan of the brain. Here's what happens:

A positron that's produced by the decay almost immediately finds an electron and annihilates into two gamma rays, Share explains. These gamma rays move in opposite directions, and by recording many such pairs and tracing them back to their origin points, an image is produced.
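As a toy illustration of the principle (deliberately simplified, and not a description of how a clinical scanner actually works), the arrival-time difference of the two back-to-back gamma rays at a pair of opposite detectors pins down where along the line between them the annihilation occurred; the geometry and timing numbers below are hypothetical.

# Toy one-dimensional time-of-flight localization between two detectors.
c = 2.99792458e8          # speed of light, m/s

def annihilation_offset(detector_separation, dt):
    """Estimate where the annihilation happened along the line joining two
    detectors, measured from the midpoint, given the arrival-time difference
    dt = t_left - t_right of the two gamma rays (positive = nearer the right)."""
    offset = c * dt / 2.0
    assert abs(offset) <= detector_separation / 2.0, "offset lies outside the detector pair"
    return offset

# Hypothetical numbers: detectors 0.8 m apart, right-hand gamma arrives 200 ps earlier.
print(annihilation_offset(0.8, 200e-12))   # about 0.03 m toward the right detector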

Looking at the Sun

In the Sun, solar flares accelerate already fast-moving particles, which then collide with slower particles in the Sun's atmosphere, producing antimatter. Scientists had expected these collisions to happen in relatively dense regions of the solar atmosphere. If that were the case, the density would cause the antimatter to annihilate almost immediately.

Share's team examined gamma rays emitted by antimatter annihilation, as observed by NASA's RHESSI spacecraft in work led by Robert Lin of the University of California, Berkeley.

The research suggests the antimatter perhaps shuffles around, being created in one spot and destroyed in another, contrary to what scientists expect for the ephemeral particles. But the results are unclear. They could also mean antimatter is created in regions where extremely high temperatures make the particle density 1,000 times lower than what scientists expected was conducive to the process.

Details of the work will be published in Astrophysical Journal Letters on Oct. 1.

Unknowns remain

Though scientists like to see antimatter as a natural thing, much about it remains highly mysterious. Even some of the fictional portrayals of mirror-image objects have not been proven totally out of this world.

"We cannot rule out the possibility that some antimatter star or galaxy exists somewhere," Share says. "Generally it would look the same as a matter star or galaxy to most of our instruments."

Theory argues that antimatter would behave identically to regular matter gravitationally.

"However, there must be some boundary where antimatter atoms from the antimatter galaxies or stars will come into contact with normal atoms," Share notes. "When that happens a large amount of energy in the form of gamma rays would be produced. To date we have not detected these gamma rays even though there have been very sensitive instruments in space to observe them."