Monday, August 9, 2010

MSc Computer Science Dissertation


Introduction
1.1 Introduction
In this work we will consider the problem of automatic generation of exploits for software vulnerabilities. We
provide a formal definition for the term “exploit” in Chapter 2 but, informally, we can describe an exploit
as a program input that results in the execution of malicious code¹. We define malicious code as a sequence
of bytes injected by an attacker into the program that subverts the security of the targeted system. This is
typically called shellcode. Exploits of this kind often take advantage of programmer errors relating to memory
management or variable typing in applications developed in C and C++. These errors can lead to buffer
overflows in which too much data is written to a memory buffer, resulting in the corruption of unintended
memory locations. An exploit will leverage this corruption to manipulate sensitive memory locations with
the aim of hijacking the control flow of the application.
Such exploits are typically built by hand and require manual analysis of the control flow of the application
and the manipulations it performs on input data. In applications that perform complex arithmetic
modifications or impose extensive conditions on the input this is a very difficult task. The task resembles
many problems to which automated program analysis techniques have already been successfully applied
[38, 27, 14, 43, 29, 9, 10, 15]. Much of this research describes systems that consist of data-flow analysis in
combination with a decision procedure. Our approach extends techniques previously used in the context of
other program analysis problems and also encompasses a number of new algorithms for situations unique to
exploit generation.
1.2 Motivation
Due to constraints on time and programmer effort it is necessary to triage software bugs into those that
are serious versus those that are relatively benign. In many cases security vulnerabilities are of critical
importance but it can be difficult to decide whether a bug is usable by an attacker for malicious purposes or
not. Crafting an exploit for a bug is often the only way to reliably determine if it is a security vulnerability.
This is not always feasible, though, as it can be a time-consuming activity that requires low-level knowledge
of file formats, assembly code, operating system internals and CPU architecture. Without a mechanism
to create exploits, developers risk misclassifying bugs. Classifying a security-relevant bug incorrectly could
result in customers being exposed to the risk for an extended period of time. On the other hand, classifying
a benign bug as security-relevant could slow down the development process and cause extensive delays as it
is investigated. As a result, there has been an increasing interest in techniques applicable to Automatic
Exploit Generation (AEG).
¹We consider exploits for vulnerabilities resulting from memory corruption. Such vulnerabilities are among the most common
encountered in modern software. They are typically exploited by injecting malicious code and then redirecting execution to
that code. Other vulnerability types, such as those relating to design flaws or logic problems, are not considered here.
The challenge of AEG is to construct a program input that results in the execution of shellcode. As the
starting point for our approach we have decided to use a program input that is known to cause a crash.
Modern automated testing methods routinely generate many of these inputs in a testing session, each of
which must be manually inspected in order to determine the severity of the underlying bug.
Previous research on automated exploit generation has addressed the problem of generating inputs that
corrupt the CPU’s instruction pointer. This research is typically criticised by pointing out that crashing a
program is not the same as exploiting it [1]. Therefore, we believe it is necessary to take the AEG process a
step further and generate inputs that not only corrupt the instruction pointer but result in the execution of
shellcode. The primary aim of this work is to clarify the problems that are encountered when automatically
generating exploits that fit this description and to present the solutions we have developed.
We perform data-flow analysis over the path executed as a result of supplying a crash-causing input
to the program under test. The information gathered during data-flow analysis is then used to generate
propositional formulae that constrain the input to values that result in the execution of shellcode. We
motivate this approach by the observation that at a high level we are trying to answer the question “Is it
possible to change the test input in such a way that it executes attacker-specified code?”. At its core, this
problem involves analysing how data is moved through program memory and what constraints are imposed
on it by conditional statements in the code.
1.3 Related Work
Previous work can be categorised by its approach to data-flow analysis and its final result. On one
side is research based on techniques from program analysis and verification. These projects typically use
dynamic run-time instrumentation to perform data-flow analysis and then build formulae describing the
program's execution. While several papers have discussed how to use such techniques to corrupt the CPU’s
instruction pointer they do not discuss how this corruption is exploited to execute shellcode. Significant
challenges are encountered when one attempts to take this step from crashing the program to execution of
shellcode.
Alternatives to the above approach are demonstrated in tools from the security community [37, 28] that
use ad-hoc pattern matching in memory to relate the test input to the memory layout of the program at the
time of the crash. An exploit is then typically generated by using this information to complete a template.
This approach suffers from a number of problems as it ignores modifications and constraints applied to
program input. As a result it can produce both false positives and false negatives, without any information
as to why the exploit failed to work or failed to be generated.
The following are papers that deal directly with the problem of generating exploits:
(i) Automatic Patch-Based Exploit Generation is Possible: Techniques and Implications - This paper [11]
is the closest academic paper, in terms of subject matter, to our work. An approach is proposed and
demonstrated that takes a program P and a patched version P′, and produces a sample input for P
that exercises the vulnerability patched in P′. Using the assumption that any new constraints added
by the patched version relate to the vulnerability, they generate an input that violates these constraints
but passes all others along a path to the vulnerability point (e.g. the first out-of-bounds write). The
expected result of providing such an input to P is that it will trigger the vulnerability. Their approach
works on binary executables, using data-flow analysis to derive a path condition and then solving such
conditions using the decision procedure STP to produce a new program input.
As the generated program input is designed to violate the added constraints it will likely cause a
crash due to some form of memory corruption. The possibility of generating an exploit that results
in shellcode execution is largely ignored. In the evaluation a specific case in which the control flow
was successfully hijacked is given, but how this would be achieved automatically is not
described.
(ii) Convicting Exploitable Software Vulnerabilities: An Efficient Input Provenance Based Approach - This
paper [35] again focuses on exploit generation but uses a “suspect input” as its starting point instead
of the differences between two program binaries. Once again data-flow analysis is used to build a path
condition which is then used to generate a new input using a decision procedure. User interaction is
required to specify how to mutate input to meet certain path conditions. As in the previous case,
the challenges and benefits involved in generating an exploit that results in shellcode execution are not
discussed.
(iii) Byakugan - Byakugan [28] is an extension for the Windows debugger, WinDbg, that can search through
program memory and attempt to match sequences of bytes from an input to those found in memory. It
can work with the Metasploit [39] tool to assist in generation of exploits. In terms of the desired end
result, this is similar to our approach although it suffers from the limitations of pattern matching.
When searching in memory the tool accounts for common modifications to data such as conversion to
upper/lower case and Unicode encoding, but will miss all others. It makes no attempt at tracking path
conditions and as a result can offer no guarantees on what parts of the input are safe to change and
still trigger the vulnerability.
(iv) Automated Exploit Development, The future of exploitation is here - This document [37] is a whitepaper
describing the techniques used in the Prototype-8 tool for automated exploit generation. The generation
of control flow hijacking exploits is the focus of the tool. This is achieved by attaching a debugger to
a running process and monitoring its execution for erroneous events as test cases are delivered to the
program. When such an event occurs the tool follows a static set of rules to create an exploit based
on what type of vulnerability was discovered (i.e. it distinguishes between stack and heap overflows).
These rules attempt to determine what parts of the input data overwrote what sensitive data and hence
may be used to gain control of the program execution. Once this is determined these values are used to
generate an exploit based on a template for the vulnerability type. No attempt is made to determine
constraints that may exist on this input or to customise the exploit template to pass these constraints.
(v) Automatic Discovery of API-Level Exploits - In this paper [25] a framework is presented to model the
details of the APIs provided by functions such as printf. Once the effects of these API features have
been formalised they can be used in predicates specifying the conditions required for an exploit. These
predicates can then be automatically solved to provide API call sequences that exploit a vulnerability.
This approach is restricted to creating exploits where all required memory corruption can be introduced
via a single API, such as printf.
As well as the above papers, the BitBlaze project [50] has resulted in a number of papers that do not
deal explicitly with the generation of exploits but do solve related problems. Approaching the issue of
automatically generating signatures for vulnerabilities [9, 10] they describe a number of useful techniques
for gathering constraints up to a particular vulnerability point and using these constraints to describe data
that might constitute an exploit.
There is also extensive previous work on data-flow analysis, taint propagation, constraint solving and
symbolic execution. Applications of these techniques to other ends, such as vulnerability discovery [27, 14],
dynamic exploit detection [43] and general program analysis [29], are now common.
1.4 Thesis
Our thesis is as follows:
Given an executable program and an input that causes it to crash there exists a sound algorithm to determine
if a control flow hijacking exploit is possible. If a control flow hijacking exploit is possible there exists
an algorithm that will automatically generate this exploit.
The purpose of this work is to investigate the above thesis and attempt to discover and implement a
satisfying algorithm. Due to the sheer number of ways in which a program may crash, and a vulnerability be
exploited, it is necessary to limit our research to a subset of the possible exploit types. In our investigation
we impose the following practical limits²:
1. Data derived from user input corrupts a stored instruction pointer, function pointer or the destination
location and source value of a write instruction.
2. Address space layout randomisation may be enabled on the system but no other exploit prevention
mechanisms are in place.
3. Shellcode is not automatically generated and must be provided to the exploit generation algorithm.

Saturday, August 7, 2010

How To Write A Dissertation


So, you are preparing to write a Ph.D. dissertation in an experimental area of Computer Science. Unless you have written many formal documents before, you are in for a surprise: it's difficult!

There are two possible paths to success:

    • Planning Ahead.

      Few take this path. The few who do leave the University so quickly that they are hardly noticed. If you want to make a lasting impression and have a long career as a graduate student, do not choose it.

    • Perseverance.

      All you really have to do is outlast your doctoral committee. The good news is that they are much older than you, so you can guess who will eventually expire first. The bad news is that they are more practiced at this game (after all, they persevered in the face of their doctoral committee, didn't they?).

Here are a few guidelines that may help you when you finally get serious about writing. The list goes on forever; you probably won't want to read it all at once. But, please read it before you write anything.


The General Idea:
  1. A thesis is a hypothesis or conjecture.

  2. A PhD dissertation is a lengthy, formal document that argues in defense of a particular thesis. (So many people use the term ``thesis'' to refer to the document that a current dictionary now includes it as the third meaning of ``thesis'').

  3. Two important adjectives used to describe a dissertation are ``original'' and ``substantial.'' The research performed to support a thesis must be both, and the dissertation must show it to be so. In particular, a dissertation highlights original contributions.

  4. The scientific method means starting with a hypothesis and then collecting evidence to support or deny it. Before one can write a dissertation defending a particular thesis, one must collect evidence that supports it. Thus, the most difficult aspect of writing a dissertation consists of organizing the evidence and associated discussions into a coherent form.

  5. The essence of a dissertation is critical thinking, not experimental data. Analysis and concepts form the heart of the work.

  6. A dissertation concentrates on principles: it states the lessons learned, and not merely the facts behind them.

  7. In general, every statement in a dissertation must be supported either by a reference to published scientific literature or by original work. Moreover, a dissertation does not repeat the details of critical thinking and analysis found in published sources; it uses the results as fact and refers the reader to the source for further details.

  8. Each sentence in a dissertation must be complete and correct in a grammatical sense. Moreover, a dissertation must satisfy the stringent rules of formal grammar (e.g., no contractions, no colloquialisms, no slurs, no undefined technical jargon, no hidden jokes, and no slang, even when such terms or phrases are in common use in the spoken language). Indeed, the writing in a dissertation must be crystal clear. Shades of meaning matter; the terminology and prose must make fine distinctions. The words must convey exactly the meaning intended, nothing more and nothing less.

  9. Each statement in a dissertation must be correct and defensible in a logical and scientific sense. Moreover, the discussions in a dissertation must satisfy the most stringent rules of logic applied to mathematics and science.

What One Should Learn From The Exercise:

  1. All scientists need to communicate discoveries; the PhD dissertation provides training for communication with other scientists.

  2. Writing a dissertation requires a student to think deeply, to organize technical discussion, to muster arguments that will convince other scientists, and to follow rules for rigorous, formal presentation of the arguments and discussion.

A Rule Of Thumb:

    Good writing is essential in a dissertation. However, good writing cannot compensate for a paucity of ideas or concepts. Quite the contrary, a clear presentation always exposes weaknesses.

Definitions And Terminology:

  1. Each technical term used in a dissertation must be defined either by a reference to a previously published definition (for standard terms with their usual meaning) or by a precise, unambiguous definition that appears before the term is used (for a new term or a standard term used in an unusual way).

  2. Each term should be used in one and only one way throughout the dissertation.

  3. The easiest way to avoid a long series of definitions is to include a statement: ``the terminology used throughout this document follows that given in [CITATION].'' Then, only define exceptions.

  4. The introductory chapter can give the intuition (i.e., informal definitions) of terms provided they are defined more precisely later.

Terms And Phrases To Avoid:

  • adverbs
      Mostly, they are very often overly used. Use strong words instead. For example, one could say, ``Writers abuse adverbs.''
  • jokes or puns
      They have no place in a formal document.
  • ``bad'', ``good'', ``nice'', ``terrible'', ``stupid''
      A scientific dissertation does not make moral judgements. Use ``incorrect/correct'' to refer to factual correctness or errors. Use precise words or phrases to assess quality (e.g., ``method A requires less computation than method B''). In general, one should avoid all qualitative judgements.
  • ``true'', ``pure'',
      In the sense of ``good'' (it is judgemental).
  • ``perfect''
      Nothing is.
  • ``an ideal solution''
      You're judging again.
  • ``today'', ``modern times''
      Today is tomorrow's yesterday.
  • ``soon''
      How soon? Later tonight? Next decade?
  • ``we were surprised to learn...''
      Even if you were, so what?
  • ``seems'', ``seemingly'',
      It doesn't matter how something appears;
  • ``would seem to show''
      all that matters are the facts.
  • ``in terms of''
      usually vague
  • ``based on'', ``X-based'', ``as the basis of''
      careful; can be vague
  • ``different''
      Does not mean ``various''; different than what?
  • ``in light of''
      colloquial
  • ``lots of''
      vague & colloquial
  • ``kind of''
      vague & colloquial
  • ``type of''
      vague & colloquial
  • ``something like''
      vague & colloquial
  • ``just about''
      vague & colloquial
  • ``number of''
vague; do you mean ``some'', ``many'', or ``most''? A quantitative statement is preferable.
  • ``due to''
      colloquial
  • ``probably''
only if you know the statistical probability (if you do, state it quantitatively)
  • ``obviously, clearly''
      be careful: obvious/clear to everyone?
  • ``simple''
      Can have a negative connotation, as in ``simpleton''
  • ``along with''
      Just use ``with''
  • ``actually, really''
      define terms precisely to eliminate the need to clarify
  • ``the fact that''
      makes it a meta-sentence; rephrase
  • ``this'', ``that''
      As in ``This causes concern.'' Reason: ``this'' can refer to the subject of the previous sentence, the entire previous sentence, the entire previous paragraph, the entire previous section, etc. More important, it can be interpreted in the concrete sense or in the meta-sense. For example, in: ``X does Y. This means ...'' the reader can assume ``this'' refers to Y or to the fact that X does it. Even when restricted (e.g., ``this computation...''), the phrase is weak and often ambiguous.
  • ``You will read about...''
      The second person has no place in a formal dissertation.
  • ``I will describe...''
      The first person has no place in a formal dissertation. If self-reference is essential, phrase it as ``Section 10 describes...''
  • ``we'' as in ``we see that''
      A trap to avoid. Reason: almost any sentence can be written to begin with ``we'' because ``we'' can refer to: the reader and author, the author and advisor, the author and research team, experimental computer scientists, the entire computer science community, the science community, or some other unspecified group.
  • ``Hopefully, the program...''
      Computer programs don't hope, not unless they implement AI systems. By the way, if you are writing an AI thesis, talk to someone else: AI people have their own system of rules.
  • ``...a famous researcher...''
      It doesn't matter who said it or who did it. In fact, such statements prejudice the reader.
  • Be Careful When Using ``few, most, all, any, every''.
      A dissertation is precise. If a sentence says ``Most computer systems contain X'', you must be able to defend it. Are you sure you really know the facts? How many computers were built and sold yesterday?
  • ``must'', ``always''
      Absolutely?
  • ``should''
      Who says so?
  • ``proof'', ``prove''
      Would a mathematician agree that it's a proof?
  • ``show''
      Used in the sense of ``prove''. To ``show'' something, you need to provide a formal proof.
  • ``can/may''
      Your mother probably told you the difference.

Voice:

    Use active constructions. For example, say ``the operating system starts the device'' instead of ``the device is started by the operating system.''

Tense:

    Write in the present tense. For example, say ``The system writes a page to the disk and then uses the frame...'' instead of ``The system will use the frame after it wrote the page to disk...''

Define Negation Early:

    Example: say ``no data block waits on the output queue'' instead of ``a data block awaiting output is not on the queue.''

Grammar And Logic:

    Be careful that the subject of each sentence really does what the verb says it does. Saying ``Programs must make procedure calls using the X instruction'' is not the same as saying ``Programs must use the X instruction when they call a procedure.'' In fact, the first is patently false! Another example: ``RPC requires programs to transmit large packets'' is not the same as ``RPC requires a mechanism that allows programs to transmit large packets.''

    All computer scientists should know the rules of logic. Unfortunately the rules are more difficult to follow when the language of discourse is English instead of mathematical symbols. For example, the sentence ``There is a compiler that translates the N languages by...'' means a single compiler exists that handles all the languages, while the sentence ``For each of the N languages, there is a compiler that translates...'' means that there may be 1 compiler, 2 compilers, or N compilers. When written using mathematical symbols, the differences are obvious because ``for all'' and ``there exists'' are reversed.

Focus On Results And Not The People/Circumstances In Which They Were Obtained:

    ``After working eight hours in the lab that night, we realized...'' has no place in the dissertation. It doesn't matter when you realized it or how long you worked to obtain the answer. Another example: ``Jim and I arrived at the numbers shown in Table 3 by measuring...'' Put an acknowledgement to Jim in the dissertation, but do not include names (even your own) in the main body. You may be tempted to document a long series of experiments that produced nothing or a coincidence that resulted in success. Avoid it completely. In particular, do not document seemingly mystical influences (e.g., ``if that cat had not crawled through the hole in the floor, we might not have discovered the power supply error indicator on the network bridge''). Never attribute such events to mystical causes or imply that strange forces may have affected your results. Summary: stick to the plain facts. Describe the results without dwelling on your reactions or events that helped you achieve them.

Avoid Self-Assessment (both praise and criticism):

    Both of the following examples are incorrect: ``The method outlined in Section 2 represents a major breakthrough in the design of distributed systems because...'' ``Although the technique in the next section is not earthshaking,...''

References To Extant Work:

    One always cites papers, not authors. Thus, one uses a singular verb to refer to a paper even though it has multiple authors. For example ``Johnson and Smith [J&S90] reports that...''

    Avoid the phrase ``the authors claim that X''. The use of ``claim'' casts doubt on ``X'' because it references the authors' thoughts instead of the facts. If you agree ``X'' is correct, simply state ``X'' followed by a reference. If one absolutely must reference a paper instead of a result, say ``the paper states that...'' or ``Johnson and Smith [J&S 90] presents evidence that...''.

Concept Vs. Instance:

    A reader can become confused when a concept and an instance of it are blurred. Common examples include: an algorithm and a particular program that implements it, a programming language and a compiler, a general abstraction and its particular implementation in a computer system, a data structure and a particular instance of it in memory.

Terminology For Concepts And Abstractions

    When defining the terminology for a concept, be careful to decide precisely how the idea translates to an implementation. Consider the following discussion:

    VM systems include a concept known as an address space. The system dynamically creates an address space when a program needs one, and destroys an address space when the program that created the space has finished using it. A VM system uses a small, finite number to identify each address space. Conceptually, one understands that each new address space should have a new identifier. However, if a VM system executes so long that it exhausts all possible address space identifiers, it must reuse a number.

    The important point is that the discussion only makes sense because it defines ``address space'' independently from ``address space identifier''. If one expects to discuss the differences between a concept and its implementation, the definitions must allow such a distinction.

Knowledge Vs. Data

    The facts that result from an experiment are called ``data''. The term ``knowledge'' implies that the facts have been analyzed, condensed, or combined with facts from other experiments to produce useful information.

Cause and Effect:

    A dissertation must carefully separate cause-effect relationships from simple statistical correlations. For example, even if all computer programs written in Professor X's lab require more memory than the computer programs written in Professor Y's lab, it may not have anything to do with the professors or the lab or the programmers (e.g., maybe the people working in professor X's lab are working on applications that require more memory than the applications in professor Y's lab).

Drawing Only Warranted Conclusions:

    One must be careful to only draw conclusions that the evidence supports. For example, if programs run much slower on computer A than on computer B, one cannot conclude that the processor in A is slower than the processor in B unless one has ruled out all differences in the computers' operating systems, input or output devices, memory size, memory cache, or internal bus bandwidth. In fact, one must still refrain from judgement unless one has the results from a controlled experiment (e.g., running a set of several programs many times, each when the computer is otherwise idle). Even if the cause of some phenomenon seems obvious, one cannot draw a conclusion without solid, supporting evidence.

Commerce and Science:

    In a scientific dissertation, one never draws conclusions about the economic viability or commercial success of an idea/method, nor does one speculate about the history of development or origins of an idea. A scientist must remain objective about the merits of an idea independent of its commercial popularity. In particular, a scientist never assumes that commercial success is a valid measure of merit (many popular products are neither well-designed nor well-engineered). Thus, statements such as ``over four hundred vendors make products using technique Y'' are irrelevant in a dissertation.

Politics And Science:

    A scientist avoids all political influence when assessing ideas. Obviously, it should not matter whether government bodies, political parties, religious groups, or other organizations endorse an idea. More important and often overlooked, it does not matter whether an idea originated with a scientist who has already won a Nobel prize or a first-year graduate student. One must assess the idea independent of the source.

Canonical Organization:

    In general, every dissertation must define the problem that motivated the research, tell why that problem is important, tell what others have done, describe the new contribution, document the experiments that validate the contribution, and draw conclusions. There is no canonical organization for a dissertation; each is unique. However, novices writing a dissertation in the experimental areas of CS may find the following example a good starting point:
    • Chapter 1: Introduction

        An overview of the problem; why it is important; a summary of extant work and a statement of your hypothesis or specific question to be explored. Make it readable by anyone.
    • Chapter 2: Definitions

        New terms only. Make the definitions precise, concise, and unambiguous.
    • Chapter 3: Conceptual Model

        Describe the central concept underlying your work. Make it a ``theme'' that ties together all your arguments. It should provide an answer to the question posed in the introduction at a conceptual level. If necessary, add another chapter to give additional reasoning about the problem or its solution.
    • Chapter 4: Experimental Measurements

        Describe the results of experiments that provide evidence in support of your thesis. Usually experiments either emphasize proof-of-concept (demonstrating the viability of a method/technique) or efficiency (demonstrating that a method/technique provides better performance than those that exist).
    • Chapter 5: Corollaries And Consequences

        Describe variations, extensions, or other applications of the central idea.
    • Chapter 6: Conclusions

        Summarize what was learned and how it can be applied. Mention the possibilities for future research.
    • Abstract:

        A short (few paragraphs) summary of the dissertation. Describe the problem and the research approach. Emphasize the original contributions.

Suggested Order For Writing:

    The easiest way to build a dissertation is inside-out. Begin by writing the chapters that describe your research (3, 4, and 5 in the above outline). Collect terms as they arise and keep a definition for each. Define each technical term, even if you use it in a conventional manner.

    Organize the definitions into a separate chapter. Make the definitions precise and formal. Review later chapters to verify that each use of a technical term adheres to its definition. After reading the middle chapters to verify terminology, write the conclusions. Write the introduction next. Finally, complete an abstract.

Key To Success:

    By the way, there is a key to success: practice. No one ever learned to write by reading essays like this. Instead, you need to practice, practice, practice. Every day.

Thursday, August 5, 2010

A master’s degree thesis


Nobody will argue that a master’s degree thesis is a daunting and long work, and it is an obligatory requirement if you want to get a diploma. However, do not hurry to panic. With some planning and organization skills, almost all students manage to complete their master’s degree theses successfully.

Do you want to know more about these techniques? Do you want to know the main secrets of a successful thesis that can bring you a master’s degree? We are glad to share them!

Successful master’s degree thesis: secret 1

Since you are going to deal with a really long work, start as early as possible. Procrastination is one of the leading failure factors.

Successful master’s degree thesis: secret 2

Before getting down to work, make sure you know all the specific requirements of your institution. As a rule, almost all master’s degree theses are organized according to a similar pattern. However, many institutions have their own specific requirements that you definitely have to follow.

Successful master’s degree thesis: secret 3

Do not wait until your master’s degree thesis is finished to submit it for review. Better do it each time a new section of your project is done. First, it is easier for your advisor to check your thesis in chunks. Second, you will have less work if you correct mistakes just in one chapter, but not in the whole project.

Successful master’s degree thesis: secret 4

Pay special attention to the literature review section of your master's thesis. Do you remember the main purpose of completing this huge project? You have to demonstrate an in-depth understanding of the chosen field of study. A perfect literature review is the most effective way to do it.

Successful master’s degree thesis: secret 5

Keep all materials (documents, photographs, statistics, etc.) that you use for writing your master’s degree thesis in one place. You will have to use some of them to make appendices.

Our writers can also help you get ready for a thesis defense.


A dissertation paper usually takes much time and research to complete. It is understandable that students use other ways to succeed, such as purchasing a custom dissertation/thesis paper. Perfectly written, it will give you spare time to catch up with other study courses.

Do not fool yourself with cheap services. Buy a custom paper written in accordance with your specific instructions. Pay via PayPal. Order your paper and get a free plagiarism report as a sign of our goodwill. Get such longed-for help with your studies!

Space changing with time


Think of a very large ball. Even though you look at the ball in three space dimensions, the outer surface of the ball has the geometry of a sphere in two dimensions, because there are only two independent directions of motion along the surface. If you were very small and lived on the surface of the ball you might think you weren't on a ball at all, but on a big flat two-dimensional plane. But if you were to carefully measure distances on the sphere, you would discover that you were not living on a flat surface but on the curved surface of a large sphere.
The idea of the curvature of the surface of the ball can apply to the whole Universe at once. That was the great breakthrough in Einstein's theory of general relativity. Space and time are unified into a single geometric entity called spacetime, and the spacetime has a geometry: spacetime can be curved just like the surface of a large ball is curved.
When you look at or feel the surface of a large ball as a whole thing, you are experiencing the whole space of a sphere at once. The way mathematicians prefer to define the surface of that sphere is to describe the entire sphere, not just a part of it. One of the tricky aspects of describing a spacetime geometry is that we need to describe the whole of space and the whole of time. That means everywhere and forever at once. Spacetime geometry is the geometry of all space and all time together as one mathematical entity.

What determines spacetime geometry?

Physicists generally work by looking for the equations of motion whose solutions best describe the system they are studying. The Einstein equation is the classical equation of motion for spacetime. It's a classical equation of motion because quantum behavior is never considered. The geometry of spacetime is treated as being classically certain, without any fuzzy quantum probabilities. For this reason, it is at best an approximation to the exact theory.
The Einstein equation says that the curvature in spacetime in a given direction is directly related to the energy and momentum of everything in the spacetime that isn't spacetime itself. In other words, the Einstein equation is what ties gravity to non-gravity, geometry to non-geometry. The curvature is the gravity, and all of the "other stuff" -- the electrons and quarks that make up the atoms that make up matter, the electromagnetic radiation, every particle that mediates every force that isn't gravity -- lives in the curved spacetime and at the same time determines its curvature through the Einstein equation.

What is the geometry of our spacetime?

As mentioned previously, the full description of a given spacetime includes not only all of space but also all of time. In other words, everything that ever happened and ever will happen in that spacetime.
Now, of course, if we took that too literally, we would be in trouble, because we can't keep track of every little thing that ever happened and ever will happen to change the distribution of energy and momentum in the Universe. Luckily, humans are gifted with the powers of abstraction and approximation, so we can make abstract models that approximate the real Universe fairly well at large distances, say at the scale of galactic clusters.
To solve the equations, simplifying assumptions also have to be made about the spacetime curvature. The first assumption we'll make is that spacetime can be neatly separated into space and time. This isn't always true in curved spacetime: in some cases, such as around a spinning black hole, space and time get twisted together and can no longer be neatly separated. But there is no evidence that the Universe is spinning in a way that would cause that to happen. So the assumption that all of spacetime can be described as space changing with time is well-justified.
The next important assumption, the one behind the Big Bang theory, is that at every time in the Universe, space looks the same in every direction at every point. Looking the same in every direction is called isotropic, and looking the same at every point is called homogeneous. So we're assuming that space is homogeneous and isotropic. Cosmologists call this the assumption of maximal symmetry. At the large distance scales relevant to cosmology, it turns out to be a reasonable approximation.
When cosmologists solve the Einstein equation for the spacetime geometry of our Universe, they consider three basic types of energy that could curve spacetime:
1. Vacuum energy
2. Radiation
3. Matter
The radiation and matter in the Universe are treated like uniform gases with equations of state that relate pressure to density.
Once the assumptions of uniform energy sources and maximal symmetry of space have been made, the Einstein equation reduces to two ordinary differential equations that are easy to solve using basic calculus. The solutions tell us two things: the geometry of space, and how the size of space changes with time.
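As a sketch of what those two equations look like in one conventional form (the notation here is an assumption, since the post doesn't fix one: a(t) is the scale factor of space, ρ the energy density, p the pressure, and k the spatial curvature constant):

```latex
% Friedmann equations for a homogeneous, isotropic universe
\left(\frac{\dot{a}}{a}\right)^{2} = \frac{8\pi G}{3}\,\rho - \frac{k}{a^{2}},
\qquad
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + 3p\right)
% Each energy source obeys an equation of state p = w\rho:
% vacuum energy w = -1, radiation w = 1/3, matter (dust) w = 0.
```

The first equation fixes the geometry of space through k; the second governs how the size of space changes with time, which is what the next paragraph refers to.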


Wednesday, August 4, 2010

String theory

Think of a guitar string that has been tuned by stretching the string under tension across the guitar. Depending on how the string is plucked and how much tension is in the string, different musical notes will be created by the string. These musical notes could be said to be excitation modes of that guitar string under tension.
In a similar manner, in string theory, the elementary particles we observe in particle accelerators could be thought of as the "musical notes" or excitation modes of elementary strings.
In string theory, as in guitar playing, the string must be stretched under tension in order to become excited. However, the strings in string theory are floating in spacetime; they aren't tied down to a guitar. Nonetheless, they have tension. The string tension in string theory is denoted by the quantity 1/(2πα′), where α′ is pronounced "alpha prime" and is equal to the square of the string length scale.
If string theory is to be a theory of quantum gravity, then the average size of a string should be somewhere near the length scale of quantum gravity, called the Planck length, which is about 10^-33 centimeters, or about a millionth of a billionth of a billionth of a billionth of a centimeter. Unfortunately, this means that strings are way too small to see with current or expected particle physics technology (or financing!), and so string theorists must devise more clever methods to test the theory than just looking for little strings in particle experiments.
String theories are classified according to whether or not the strings are required to be closed loops, and whether or not the particle spectrum includes fermions. In order to include fermions in string theory, there must be a special kind of symmetry called supersymmetry, which means that for every boson (a particle that transmits a force) there is a corresponding fermion (a particle that makes up matter). So supersymmetry relates the particles that transmit forces to the particles that make up matter.
Supersymmetric partners to currently known particles have not been observed in particle experiments, but theorists believe this is because supersymmetric particles are too massive to be detected at current accelerators. Particle accelerators could be on the verge of finding evidence for high-energy supersymmetry in the next decade. Evidence for supersymmetry at high energy would be compelling evidence that string theory is a good mathematical model for Nature at the smallest distance scales.
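In symbols, the tension and length-scale relations mentioned above, together with the Planck length, read as follows (standard notation, assumed here, with ℓ_s the string length scale):

```latex
% string tension and its relation to the string length scale
T = \frac{1}{2\pi\alpha'}, \qquad \alpha' = \ell_{s}^{2}
% Planck length, the expected scale of quantum gravity:
\ell_{P} = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6 \times 10^{-33}\ \text{cm}
```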

Tuesday, August 3, 2010

Antimatter


Antimatter sounds like the stuff of science fiction, and it is. But it's also very real. Antimatter is created and annihilated in stars every day. Here on Earth it's harnessed for medical brain scans.

"Antimatter is around us each day, although there isn't very much of it," says Gerald Share of the Naval Research Laboratory. "It is not something that can be found by itself in a jar on a table."

So Share went looking for evidence of some in the Sun, a veritable antimatter factory, leading to new results that provide limited fresh insight into these still-mysterious particles.

Simply put, an antimatter particle is a fundamental particle of regular matter with its electrical charge reversed. The common proton has an antimatter counterpart called the antiproton. It has the same mass but an opposite charge. The electron's counterpart is called a positron.

Antimatter particles are created in ultra high-speed collisions.

One example is when a high-energy proton in a solar flare collides with carbon, Share explained in an e-mail interview. "It can form a type of nitrogen that has too many protons relative to its number of neutrons." This makes its nucleus unstable, and a positron is emitted to stabilize the situation.

But positrons don't last long. When they hit an electron, they annihilate and produce energy.

"So the cycle is complete, and for this reason there is so little antimatter around at a given time," Share said.

The antimatter wars

To better understand the elusive nature of antimatter, we must back up to the beginning of time.

In the first seconds after the Big Bang, there was no matter, scientists suspect. Just energy. As the universe expanded and cooled, particles of regular matter and antimatter were formed in almost equal amounts.

But, theory holds, a slightly higher percentage of regular matter developed -- perhaps just one part in a million -- for unknown reasons. That was all the edge needed for regular matter to win the longest running war in the cosmos.

"When the matter and antimatter came into contact they annihilated, and only the residual amount of matter was left to form our current universe," Share says.

Antimatter was first theorized based on work done in 1928 by the physicist Paul Dirac. The positron was discovered in 1932. Science fiction writers latched onto the concept and wrote of antiworlds and antiuniverses.

Potential power

Antimatter has tremendous energy potential, if it could ever be harnessed. A solar flare in July 2002 created about a pound of antimatter, or half a kilo, according to new NASA-led research. That's enough to power the United States for two days.
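As a rough, hypothetical back-of-the-envelope sketch of where a figure like that comes from, mass-energy equivalence (E = mc²) gives the energy released when the flare's estimated half-kilogram of antimatter annihilates with an equal mass of ordinary matter (the function name and rounding are illustrative, not from the article):

```python
C = 2.998e8  # speed of light in meters per second

def annihilation_energy(antimatter_kg):
    """Energy (joules) released when `antimatter_kg` of antimatter
    annihilates with an equal mass of ordinary matter."""
    total_mass = 2.0 * antimatter_kg  # both the antimatter and the matter convert
    return total_mass * C ** 2

energy_j = annihilation_energy(0.5)  # the flare's ~0.5 kg estimate
print(f"{energy_j:.2e} J")  # prints 8.99e+16 J
```

That is roughly 10^17 joules, which gives a feel for why researchers compare it to days of national energy consumption.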

Laboratory particle accelerators can produce high-energy antimatter particles, too, but only in tiny quantities. Something on the order of a billionth of a gram or less is produced every year.

Nonetheless, sci-fi writers long ago devised schemes using antimatter to power space travelers beyond light-speed. Antimatter didn't get a bad name, but it sank into the collective consciousness as a purely fictional concept. Given some remarkable physics breakthrough, antimatter could in theory power a spacecraft. But NASA researchers say it's nothing that will happen in the foreseeable future.

Meanwhile, antimatter has proved vitally useful for medical purposes. The fleeting particles of antimatter are also created by the decay of radioactive material, which can be injected into a patient in order to perform a Positron Emission Tomography (PET) scan of the brain. Here's what happens:

A positron that's produced by decay almost immediately finds an electron and annihilates into two gamma rays, Share explains. These gamma rays move in opposite directions, and by recording the origin points of many such pairs an image is produced.
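Numerically, when the positron and electron annihilate essentially at rest, energy and momentum conservation fix both the photon energies and their back-to-back directions (the 511 keV value is the standard electron rest energy, not quoted in the article):

```latex
% electron-positron annihilation at rest
e^{+} + e^{-} \rightarrow \gamma + \gamma, \qquad
E_{\gamma} = m_{e} c^{2} \approx 511\ \text{keV each}
% Momentum conservation forces the two photons to emerge
% in opposite directions, which is what PET scanners exploit.
```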

Looking at the Sun

In the Sun, flares of matter accelerate already fast-moving particles, which collide with slower particles in the Sun's atmosphere, producing antimatter. Scientists had expected these collisions to happen in relatively dense regions of the solar atmosphere. If that were the case, the density would cause the antimatter to annihilate almost immediately.

Share's team examined gamma rays emitted by antimatter annihilation, as observed by NASA's RHESSI spacecraft in work led by Robert Lin of the University of California, Berkeley.

The research suggests the antimatter perhaps shuffles around, being created in one spot and destroyed in another, contrary to what scientists expect for the ephemeral particles. But the results are unclear. They could also mean antimatter is created in regions where extremely high temperatures make the particle density 1,000 times lower than what scientists expected was conducive to the process.

Details of the work will be published in Astrophysical Journal Letters on Oct. 1.

Unknowns remain

Though scientists like to see antimatter as a natural thing, much about it remains highly mysterious. Even some of the fictional portrayals of mirror-image objects have not been proven totally out of this world.

"We cannot rule out the possibility that some antimatter star or galaxy exists somewhere," Share says. "Generally it would look the same as a matter star or galaxy to most of our instruments."

Theory argues that antimatter would behave identically to regular matter gravitationally.

"However, there must be some boundary where antimatter atoms from the antimatter galaxies or stars will come into contact with normal atoms," Share notes. "When that happens a large amount of energy in the form of gamma rays would be produced. To date we have not detected these gamma rays even though there have been very sensitive instruments in space to observe them."