Earth Event Alerts


[ IMAGE ( above ): IBM Stratus and Cirrus supercomputers analyze Global Environmental Intelligence ( click to enlarge ) ]


by Kentron Intellect Research Vault [ E-MAIL: KentronIntellectResearchVault@Gmail.Com ]

August 17, 2012 19:00:42 ( PST ) Updated ( Originally Published: March 23, 2011 )

MARYLAND, Fort George G. Meade – August 17, 2012 – IBM Stratus and IBM Cirrus supercomputers, along with CRAY XK6m and CRAY XT5 ( Jaguar ) massively parallel and vector supercomputers, are securely controlled by the U.S. National Security Agency ( NSA ) to analyze Global Environmental Intelligence ( GEI ) data extracted from ground-based ( terrestrial ) monitoring stations and space-based ( extraterrestrial ) spaceborne platforms studying Earth Event ( Space Weather ) effects via High-Performance Computing ( HPC ), as well as for:

– Weather Forecasting ( including Space Weather );
– U.S. Government Classified Projects;
– Scientific Research;
– Design Engineering; and,
– Other Research.

[ IMAGE ( above ): CRAY XK6m supercomputers analyze Global Environmental Intelligence ( click to enlarge ) ]

CRAY INC.'s largest customers are U.S. government agencies, e.g. the U.S. Department of Defense ( DoD ) Defense Advanced Research Projects Agency ( DARPA ) and the U.S. Department of Energy ( DOE ) Oak Ridge National Laboratory ( ORNL ), which together account for about 3/4 of CRAY INC. revenue; its other supercomputers are used worldwide by academic institutions ( universities ) and industrial companies ( private-sector firms ).

CRAY INC. additionally provides maintenance and support services, and sells data storage products from partners ( e.g. BlueArc, LSI and Quantum ).

Supercomputer competitors of CRAY INC. are:

– IBM;
– HEWLETT-PACKARD; and,
– DELL.

On May 24, 2011 CRAY INC. announced its new CRAY XK6 supercomputer, a hybrid supercomputing system combining its Gemini InterConnect, AMD Opteron™ 6200 Series processors ( code-named: InterLagos ) and NVIDIA Tesla 20 Series GPUs into a tightly integrated, upgradeable supercomputing system capable of more than 50 petaflops ( i.e. more than 50 quadrillion computing operations per second ), a multi-purpose supercomputer designed for the next generation of many-core High-Performance Computing ( HPC ) applications.
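In concrete terms, "petaflops" converts directly into raw operation counts; the short sketch below ( plain Python, using only the 50-petaflop figure cited above ) shows the arithmetic behind the "quadrillions of operations per second" gloss:

```python
# A petaflop is 10**15 floating-point operations per second.
PETAFLOP = 10 ** 15

system_petaflops = 50  # peak capability cited above for the CRAY XK6
ops_per_second = system_petaflops * PETAFLOP

print(f"{ops_per_second:.1e} operations/second")        # 5.0e+16
# "Quadrillion" = 10**15, so 50 petaflops is 50 quadrillion ops/s.
print(ops_per_second // 10 ** 15, "quadrillion ops/s")  # 50 quadrillion ops/s
```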

The SWISS NATIONAL SUPERCOMPUTING CENTRE ( CSCS ) – located in Manno, Switzerland – is CRAY INC.'s first ( 1st ) customer for the new CRAY XK6 system. CSCS promotes and develops technical and scientific services in the field of High-Performance Computing ( HPC ) for the Swiss research community, and is upgrading its CRAY XE6m system ( nicknamed: Piz Palu ) into a multiple-cabinet CRAY XK6 supercomputer. CSCS supports scientists working in:

– Weather Forecasting;
– Physics;
– Climatology;
– Geology;
– Astronomy;
– Mathematics;
– Computer Sciences;
– Material Sciences;
– Chemistry;
– Biology;
– Genetics; and,
– Experimental Medicine.

Data additionally analyzed by these supercomputers include:

– Ultra-Deep Sea Volcanoes located in continental plate fracture zones several miles beneath ocean basins ( e.g. the Asia-Pacific Rim, also known as the “Pacific Ring of Fire,” where a circum-Pacific seismic belt of earthquakes frequently impacts areas far across the Pacific Ocean in the Americas ).

Global geoscience recognizes that the ‘ground-shaking’ of earthquakes hides a great deal of what people are actually walking on top of: large geographic land masses known as ‘continental shelves’ or “continental plates” that move ( tectonics ) because of superheated, pressurized ‘extrasuperconducting’ magnetic energy properties released when molten magma violently explodes beneath the surface of the Earth, down in the ultra-deep seas.

[ IMAGE ( above ): Global Tectonic Plate Boundaries & Major Volcano HotSpots ( click to enlarge ) ]

Significant volcanoes dot this global 25,000-mile circular region known as the “Pacific Ring of Fire,” extending from south of Australia up the entire east coast of Japan, China and the Kamchatka Peninsula of Russia, across the Aleutian Islands of Alaska, and then south down the entire west coast of North America and Latin America.

[ IMAGE ( above ): Ultra-Deep Sea Pacific Ocean Basin ( click to enlarge ) ]

The March 11, 2011 Tohoku-chiho Taiheiyo-oki, Japan 9.0 earthquake held several secrets, including U.S. government contractors simultaneously monitoring a significant “moment magnitude” ( Mw ) Earth Event occurring parallel to the east coast of Japan beneath the Western Pacific Ocean, where an entire suboceanic mountain range was being split in half ( south to north ) along 310 miles and torn open 100 feet wide ( east to west ) – details the public was neither aware of nor told about.

Interestingly, the earthquakes around the Japan islands have not stopped since March 11, 2011: the swarm of 4.0, 5.0, 6.0 and 7.0 Richter scale earthquakes continues as a direct and proximate result of erupting ‘suboceanic volcanoes’ moving these large “plates,” which in turn are beginning to force others to slam into one another thousands of miles away.

Japan’s Western Pacific Ocean ‘east coast’ has a ‘continental plate’ collision point meeting the ‘west coast’ of North America near the Cascade mountain range ‘plate,’ which reacts in one ( 1 ) of two ( 2 ) ways, i.e. ‘seaward’ ( plate thrusting toward Japan ) or ‘landward’ ( plate thrusting toward the Pacific Northwest of the United States and/or Canada ).

What The Public Never Knew

Government leadership, globally, is acutely familiar with these types of major Earth Events, including ‘monstrous plate-tectonic pushing matches,’ which usually collapse one or more ‘national infrastructures’ and typically spell ‘death’ and ‘serious injuries’ for populations in developed areas.

Extremely familiar with the mass public panic that results from Earth Event catastrophes, governments maintain ‘contingency actions’ pre-approved by ‘governing bodies’ and/or ‘national leadership’ Executive Order Directives. Although not advertised, these are a matter of public record and ‘immediately call’ upon ‘all military forces’ to carry out “risk reduction” ( ‘minimization of further damages and dangers’ ) through what is referred to as “mitigation” ( ‘disaster management’ ) within “National Disaster Preparedness Planning” ( ‘national contingency measures’ ) – details citizens are unaware of. Government decision-makers know a national emergency can bring temporary suspension of Constitutional Rights and a loss of freedoms – a volatile subject few care to discuss, because ‘any significant natural disaster’ will result in government infringement on many civil liberties most populations are accustomed to enjoying.

Before 1 minute and 40 seconds had passed into the March 11, 2011 Tohoku, Japan earthquake ( Richter scale: M 9.0 ), key U.S. government decision-makers were discussing the major Earth Event unfolding off Japan’s east coast Plate-Boundary subduction zone beneath the ultra-deep sea of the Western Pacific Ocean, where Japan’s monstrous volcanic mountain range had split open at least 100 feet wide and cracked along 310 miles in a northern direction, headed straight for the Aleutian Islands of Alaska in the United States.

U.S. Military Contingent Standby “Red Alert” Notification

The U.S. Air Force ( USAF ) ‘subordinate organization’ Air and Space Operations ( ASO ) Communications Directorate ( A6 ) ‘provides support’ for ‘daily operations,’ ‘contingency actions’ and ‘general’ Command, Control, Communication and Computer Intelligence ( C4I ) for the U.S. Air Force Weather Agency ( USAFWA ). Its 1st Weather Group ( 1ST WXG ) Directorate readied the 25th Operational Weather Squadron ( OWS ) at Davis-Monthan Air Force Base ( Tucson, Arizona ), responsible for conjunctive communication notification issuance of an Earth Event “Red Alert” issued directly to U.S. Army Western Command ( WESTCOM ) with a “Standby-Ready” clause pausing Western Region ( CONUS ) mobilization of Active, Reserve and National Guard military forces at specific installations – based on the Japan Earth Event “moment magnitude” ( Mw ) Plate-Boundary rebound expected to strike the North American west coast Plate-Boundary of the Cascadia Range, reactively triggering its subduction zone into a Cascadia ‘great earthquake.’

CALTECH Public News Suppression Of Major Earth Event

Officials, attempting to diminish any clear public understanding of the facts – the public knowing only a Richter scale earthquake ‘magnitude’ and never hearing what a major Earth Event “moment magnitude” ( Mw ) entailed – served up ‘officially designed double-speak psycho-babble terms’ unfamiliar to the public as ‘creative attention distraction,’ announcing that the “Japan earthquake experienced,” a:

– “Bilateral Rupture;” and,

– “Slip Distribution.”

The facts are that the Japan ‘earthquake’ would ‘never have occurred’ ‘unless’:

1ST – “Bilateral Rupture” ( ‘suboceanic subterranean tectonic plate split wide open’ ) occurred; followed by,

2ND – “Slip Distribution” ( ‘tectonic plate movement’ ); then finally,

3RD – “Ground Shaking” ( ‘earthquake’ ) response.

Officials failed to give the public any notification that a major Earth Event “moment magnitude” ( Mw ) on the “Pacific Ring of Fire” ( circum-Pacific seismic belt ) in the Western Pacific Ocean involved a huge:

1. Continental Plate Break Off;

2. Undersea Plate Mountain Range Crack Wide Open; plus,

3. Mountain Range Split Open 310-Miles Long.

There are some, lying at rest, who might ‘not consider’ the aforementioned three ( 3 ) major Earth Event occurrences significant – except those ‘still living’ on Earth.

Asia-Pacific Rim

This huge western Pacific Ocean ‘undersea mountain range’ moved ‘east,’ crushing the smaller portion of its tectonic plate toward the continent of Asia, which commenced continuous day-and-night streams of significant earthquakes – still registering 5.0+ and 6.0+ Richter scale magnitudes, now and for over 12 days – throughout the area surrounding Japan, the point nearest where the tectonic plate meets the continent of Asia within the western Pacific Ocean, from where this ‘monstrous undersea mountain range’ suddenly split, sending the ‘eastern half’ – with the ‘tectonic plate’ broken beneath it – slamming into the continent of Asia.

Simultaneously pushed outward with even greater force away from the Asian continent ( note: explosive blasts – as from a cannon or a force-shaped explosive – project outward when the ‘initial explosive blast’ is blunted by a back-stop ) was this ‘monstrous undersea mountain range’ split-off ‘western half’ ( 310 miles / 500 kilometers long ), which slammed west up against the Americas’ ‘western tectonic plates.’

This ‘is’ the ‘major’ “Earth Event” that will have consequential global repercussions, ‘officially minimized’ by ‘focusing public attention’ on a ‘surface’ Earth Event earthquake of 9.0 Richter scale magnitude ( once ), while further diminishing the hundreds of significant earthquakes still occurring 12 days after the initial earthquake.

Asia-Pacific Rim “Ring Of Fire”

Many are unaware that the “Asia-Pacific Rim” is ( also known as ) the “Ring of Fire,” under which the ‘ultra-deep sea Pacific Ocean’ holds ‘numerous gigantic volatile volcanoes’ positioned in an ‘incredibly large circle’ ( “ring” ) around a ‘huge geographic land mass area’ comprised of ‘tectonic plates’ that ‘connect’ the ‘Eastern Asias’ to the ‘Western Americas.’

Yellowstone National Park Super Volcano

Many people are still wondering ‘why’ the Japan earthquakes have not yet stopped, and why Japan is being plagued by such a long swarm of significant earthquakes to this very day, nearly 60 days later. The multiple color video clips ( below ) provide information on unusual earthquake swarm patterns and reversals from studies of the World’s largest supervolcano in Wyoming ( USA ), located at Yellowstone National Park, a global public attraction for viewing natural underground volcano steam vents known as geyser eruptions:

[ PHOTO ( above ): Major HotSpot at Yellowstone displays Half Dome cap of granite rock above unerupted volcano magma. ]

Ultra-Deep Sea Volcanoes

When huge undersea volcanoes erupt, they dynamically force incredibly large geographic land-mass plates to move, whereupon movement is simultaneously and consequentially experienced on ‘surface land areas’ – what people know as ‘earthquakes,’ with their ‘aftermath measurements’ provided on the “Richter scale,” which most do not understand. These Richter scale measurements are only ‘officially provided estimates,’ as ‘officials are never presented with totally accurate measurements’ – many of which are ‘not obtained with any great precision for up to 2 years after the initial earthquake.’

Rarely are ‘precise measurements’ publicly provided, and at any time during that 2-year interim the public may hear that their previously reported earthquake Richter scale measurement was either “officially upgraded” or “officially downgraded.” Often this becomes apparent when one sees ‘many other countries contradicting U.S. public news announcements’ about the magnitude of a particularly controversial earthquake. An example of this was seen surrounding the March 11, 2011 earthquake in Japan:

– Japan 1st public announcement: 9.2 Richter scale;

– United States 1st public announcement: 8.9 Richter scale;

– United States 2nd public announcement: 9.0 Richter scale; and,

– United States 3rd public announcement: 9.1 Richter scale.

What will the March 11, 2011 Japan earthquake be officially reported as in 2 years? Who knows?
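The spread between those competing announcements matters more than it looks: radiated seismic energy scales roughly as 10^(1.5·M), a standard seismological scaling law ( not a figure from this report ), so even a 0.2-magnitude revision approximately doubles the implied energy. A minimal sketch:

```python
def energy_ratio(m1: float, m2: float) -> float:
    """Radiated-energy ratio between two magnitudes (standard 10**(1.5*M) scaling)."""
    return 10 ** (1.5 * (m2 - m1))

# U.S. 1st announcement (8.9) vs. 3rd announcement (9.1):
print(round(energy_ratio(8.9, 9.1), 2))  # 2.0  -- about double the energy
# U.S. 1st announcement (8.9) vs. Japan's announcement (9.2):
print(round(energy_ratio(8.9, 9.2), 2))  # 2.82
```

This is why seemingly small revisions between the announcements listed above are not cosmetic.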

Never publicly announced, however, are measurements of an earthquake’s ‘force strength pressure accumulation’ transmitted through suboceanic tectonic plates grinding against one another – a major Earth Event ‘geographic pushing process’ – as seen by U.S. NSA supercomputers through global ground- and space-based monitoring analysis surrounding the “Asia-Pacific Rim Ring of Fire,” stretching from the ‘Eastern Asias’ to the ‘Western Americas’ and beyond.

This ‘domino plate tectonic principle’ results from the combined amounts of ‘volcanic magmatic eruptive force strength’ and ‘tectonic plate accumulative pressure build-up’ against ‘adjacent tectonic plates,’ causing ‘suboceanic, subterranean and surface land to move’ – and ‘how significant such amounts are determines the strength’ of both the consequential ‘earthquakes’ and the resultant ‘tsunamis.’

Waterway Tsunamis

When most of the public ‘hears about’ a “tsunami,” they ‘think’ of ‘high waves’ near “ocean” coastal regions bringing significant floods over residents of nearby cities. Few realize the ‘vast majority of Earth’s population lives along ocean coastal regions,’ and that one ( 1 ) ‘gigantic tsunami’ could ‘kill vast populations living near oceans’ in the wake of popular beaches – a tough trade-off for some, while others logically choose ‘living further inland.’ Yet even large bodies of water like ‘large lakes,’ where ‘tide levels are also affected by the gravitational pull of the moon,’ can bring a tsunami: depending on which direction tectonic plates move, a ‘force-directionalized earthquake’ can create a ‘tsunami’ with significant inundating floods over residents living in cities near those ‘large shoreline’ areas too.

What most of the public does not yet fully realize is that ‘large river bodies of water,’ like the Mississippi River – a ‘north to south’ directional river – could easily see ‘east to west’ directional ‘tectonic plates’ move adjacent states along the New Madrid Fault zone with significant ‘earthquakes’; such tectonic plate movement is easily capable of squeezing the side banks of even the Mississippi River, forcing huge amounts of water out onto both ‘east’ and ‘west’ sides and resulting in ‘seriously significant inland flooding’ over residents living in ‘low-lying’ states of the Central Plains of the United States.

Japan “Pacific Ring Of Fire” Earthquakes To Americas Cascadia Fault Zone

Japan accounts of a correlative tsunami suggest the Cascadia Fault rupture occurred as one ( 1 ) single earthquake, a 9-Mw Earth Event on January 26, 1700: geological evidence obtained from a large number of coastal sites from northern California ( USA ) up to southern Vancouver Island ( Canada ), plus historical records from Japan, show the 1,100-kilometer length of the Cascadia Fault subduction zone ruptured ( split, causing that earthquake ) in a major Earth Event at that time. While the sizes of earlier Cascadia Fault earthquakes are unknown, some “ruptured adjacent segments” ( ‘adjacent tectonic plates’ ) within the Cascadia Fault subduction zone over periods of time ranging from as little as ‘hours’ to ‘years,’ as has historically happened in Japan.

Over the past 20 years, scientific progress has been made in understanding Cascadia Fault subduction zone behavior; only 15 years ago scientists were still debating whether ‘great earthquakes’ occurred at ‘fault subduction zones’ at all. Today, however, most scientists realize ‘great earthquakes’ actually ‘do occur in fault subduction zone regions.’

Now, scientific discussions focus on subjects of:

– Earth crust ‘structural changes’ when a “Plate Boundary” ruptures ( splits );
– Related tsunamis; and,
– Seismogenic zone ( tectonic plate ‘locations’ and ‘width sizes’ ).

Japan America Earthquakes And Tsunamis Exchange

Great Cascadia earthquakes generate tsunamis: most recently, at least a ‘32-foot-high tidal wave’ struck the Pacific Ocean west coast of Washington, Oregon, and northern California, and that Cascadia earthquake tsunami sent a consequential 16-foot-high tidal wave onto Japan.

These Cascadia Fault subduction zone earthquake tsunamis threaten coastal communities all around the Pacific Ocean “Ring of Fire,” but have their greatest impact on the United States west coast and Canada, which are struck within ‘15 minutes’ to ‘40 minutes’ after a Cascadia Fault subduction zone earthquake occurs.

Deposits from past Cascadia Fault earthquake tsunamis have been identified at ‘numerous coastal sites’ in California, Oregon, Washington, and even as far ‘north’ as British Columbia ( Canada ), where the distribution of these deposits – based on sophisticated computer software simulations of tsunamis – indicates many of those coastal communities are well within the flooding inundation zones of past Cascadia Fault earthquake tsunamis.

West coast communities in California, Oregon, Washington, and even as far ‘north’ as British Columbia ( Canada ) are indeed threatened by future tsunamis from a Cascadia great earthquake event; ‘tsunami arrival times’ depend on the distance from the ‘point of rupture’ ( tectonic plate split, causing the earthquake ) within the Cascadia Fault subduction zone to the west coast “landward” side.
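Those arrival times can be roughed out from the shallow-water wave speed √(g·depth), which governs tsunami propagation over the continental shelf; the distance and average depth below are illustrative assumptions, not measured Cascadia values:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def tsunami_travel_minutes(distance_km: float, avg_depth_m: float) -> float:
    """Shallow-water approximation: tsunami speed = sqrt(g * depth)."""
    speed_m_s = math.sqrt(G * avg_depth_m)
    return (distance_km * 1000.0 / speed_m_s) / 60.0

# Hypothetical rupture point 150 km offshore over ~1,500 m average depth:
print(round(tsunami_travel_minutes(150, 1500), 1))  # 20.6 -- inside the 15-40 minute window
```

Closer rupture points and shallower water shorten that margin; the 15-40 minute range cited above corresponds to exactly this kind of distance/depth variation.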

Cascadia Earthquake Stricken Damage Zone Data

Strong ground shaking from a “moment magnitude” ( Mw ) “9” Plate-Boundary earthquake will last 3 minutes or ‘more,’ dominated by ‘long periods of further earthquakes,’ and ‘ground shaking movement damage’ will occur as far inland as the cities of Portland, Oregon; Seattle, Washington; and Vancouver, British Columbia ( Canada ).

Tsunami Optional Wave Patterns “Following Sea”

Large cities within 62 miles to 93 miles of the nearest point of the inferred Cascadia Plate-Boundary zone rupture will not only experience ‘significant ground shaking’ but also ‘extreme duration ground shaking’ lasting far longer, in addition to far more powerful tsunamis carrying far more seawater because of their consequential “lengthened wave periods” ( ‘lengthier distances’ between ‘wave crests’ or ‘wave curls’ ) – akin to what fishermen describe as a deadly “following sea” ( swallowing everything within an ‘even more powerful waterpath’ ) – which inland causes ‘far more significant damage’ to many ‘tall buildings’ and ‘lengthy structures’ where ‘earthquake magnitude strength’ will be felt ‘strongest,’ all along the United States of America Pacific Ocean west coast regional areas.

Data Assessments Of Recurring Cascadia Earthquakes

Cascadia ‘great earthquakes’ have a “mean recurrence interval” ( ‘the time period between one earthquake and the next’ ) – specific to the point of the Cascadia Plate-Boundary – of between 500 years and 600 years; however, Cascadia Fault earthquakes in the past have occurred well within a 300-year interval, or even less time, since the Cascadia ‘great earthquake’ of the 1700s. Time intervals between ‘successive great earthquakes’ of only a few centuries up to 1,000 years have little ‘well-measured data’ as to ‘recurrence interval,’ because the number of recorded Cascadia earthquakes has rarely measured over five ( 5 ). Data additionally indicate Cascadia earthquake intermittency, with irregular intervals when they did occur; the data also lack ‘random distribution’ ( ‘tectonic plate shift’ or ‘earth movement’ ) or ‘clustering’ of these Cascadia earthquakes over a lengthier period of time, so ‘more accurate assessments are unavailable’ for knowing anything more about them. Hence, because the Cascadia earthquake ‘recurrence pattern’ is so ‘poorly known,’ the probability of the next Cascadia earthquake occurrence is unfortunately unclear, with extremely sparse ‘interval information’ details.
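One common way to turn a mean recurrence interval into a probability, despite the sparse record noted above, is a memoryless Poisson model. This is a sketch under that assumption only – it is not a claim about the actual Cascadia hazard, precisely because the recurrence pattern is so poorly known:

```python
import math

def poisson_probability(window_years: float, mean_recurrence_years: float) -> float:
    """P(at least one event in the window) under a memoryless Poisson model."""
    return 1.0 - math.exp(-window_years / mean_recurrence_years)

# Using the 500-600 year mean recurrence cited above, over a 50-year window:
for tau in (500, 600):
    print(tau, round(poisson_probability(50, tau), 3))  # 500 -> 0.095, 600 -> 0.08
```

A clustered or quasi-periodic recurrence model would give different numbers, which is the report's point about sparse interval data.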

Cascadia Plate-Boundary “Locked” And “Not Locked”

The Cascadia Plate-Boundary zone is ‘currently locked’ off the U.S. west coast shoreline, where it has been accumulating plate tectonic pressure build-up from other tectonic plates crashing into it for over 300 years.

The Cascadia Fault subduction zone, at its widest point, is located just off the northwest coast of the State of Washington, where the maximum area of seismogenic rupture is approximately 1,100 kilometers long and 50 kilometers up to 150 kilometers wide. Cascadia Plate-Boundary seismogenic portion location and size data are key-critical for determining earthquake magnitude, tsunami size, and the strength of ground shaking.
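Those rupture dimensions are enough to rough out why a full-margin Cascadia event comes out near magnitude 9, via the standard seismic-moment relation M0 = rigidity × area × slip and the Hanks-Kanamori magnitude formula. The rigidity and ~20 m slip used below are textbook-style assumptions, not figures from this report:

```python
import math

RIGIDITY_PA = 3.0e10  # Pa, typical crustal shear modulus (assumption)

def rupture_magnitude(length_km: float, width_km: float, slip_m: float) -> float:
    """Mw from seismic moment M0 = rigidity * rupture area * average slip."""
    area_m2 = (length_km * 1e3) * (width_km * 1e3)
    m0 = RIGIDITY_PA * area_m2 * slip_m  # seismic moment, N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)  # Hanks-Kanamori

# Full 1,100 km rupture, 100 km average width, assumed ~20 m of slip:
print(round(rupture_magnitude(1100, 100, 20), 1))  # 9.1
```

Halving the assumed width or slip drops the result by only a few tenths of a magnitude unit, which is why the seismogenic-zone size data above are described as key-critical.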

The Cascadia Plate-Boundary “landward limit” – of only its “locked” portion, where ‘no tectonic plate shift has yet occurred’ – lies between the Juan de Fuca tectonic plate and the North America tectonic plate, where it became “locked” between Cascadia earthquakes; however, this “locked” notion has only been delineated from ‘geodetic measurements’ of ‘surface land deformation’ observations. Unfortunately, its “seaward limit” has ‘very few constraints’ up to ‘no constraints’ on movement – on its so-called “locked zone” portion – which could certainly move at any time.

Cascadia Plate Continues Sliding

The Cascadia transition zone, separating its “locked zone” from its “continuous sliding zone” headed east into the continent of North America, is poorly constrained ( held back ), so a Cascadia rupture may extend an unknown distance – from its now “locked zone” into its “continuously sliding transition zone.”

On some Earth crust faults near coastal regions, earthquakes may also bring ‘additional Plate-Boundary earthquakes,’ ‘increased tsunami tidal wave size’ plus ‘intensification of local area ground shaking.’

Earth Event Mitigation Forces Global Money Flow

The primary ‘government purpose’ in ‘establishing international’ “risk reduction” is solely to ‘minimize global costs from damages’ associated with major magnitude Earth Events similar to – but even greater than – what happened on March 11, 2011 all over Japan.

Historical earthquake damages assist in predictive projections; damage-loss studies suggest disastrous future losses will occur in the Pacific Northwest from a Cascadia Fault subduction ‘great earthquake.’ National ‘loss mitigation efforts’ – studying ‘other seismically active regions’ – plus ‘national cost-benefit studies’ indicate that ‘earthquake damage loss mitigation’ may effectively ‘reduce losses’ and ‘assist recovery’ efforts in the future. Accurate data acquisition, geological and geophysical research, and immediate ‘technological information transfer’ to ‘national key decision-makers’ were intended to reduce the additional risks the Pacific Northwest Cascadia Fault subduction zone poses to the Western North America coastal region.

Damage, injuries, and loss of life from the next great earthquake on the Cascadia Fault subduction zone will indeed be ‘great’ and ‘widespread,’ and will significantly ‘impact national economies’ ( Canada and United States ) for years to decades in the future – which has seen a concerted global increase in:

– International Cooperative Research;
– International Information Exchanges;
– International Disaster Preparedness;
– International Damage Loss Mitigation Planning;
– International Technology Applications; and,
– More.

Tectonics Observatory

The CALTECH Advanced Rapid Imaging and Analysis ( ARIA ) Project is a collaboration between the NASA Jet Propulsion Laboratory ( JPL ) and the California Institute of Technology ( Pasadena ) Tectonics Observatory. ARIA Project members and CALTECH scientists Shengji Wei and Anthony Sladen ( of GEOAZUR ) modelled the Japan Tohoku earthquake fault zone sub-surface ( below surface ) ‘tectonic plate movement,’ derived from:

– TeleSeismic Body Waves ( long-distance observations ); and,

– Global Positioning Satellites ( GPS ) ( near-source observations ).

A 3D image of the fault moving can be viewed in Google Earth ( the internet website link to that KML file is found in the “References” at the bottom of this report ), which projects the fault rupture in three-dimensional images viewable from any point of reference. ‘That analysis’ depicts the rupture ( ground splitting open 100 feet ) resulting in the earthquake itself, ‘triggered from 15 miles ( 24 kilometers ) beneath the ultra-deep sea of the Western Pacific Ocean,’ with the ‘entire island of Japan being moved east’ by 16 feet ( 5 meters ) from its ‘before-earthquake location.’
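A coseismic shift like the 16-foot ( 5-meter ) figure is obtained by differencing a GPS station's position before and after the event. The station coordinates below are invented purely for illustration – a real analysis would use actual GEONET solutions – but the differencing arithmetic is the same:

```python
import math

EARTH_RADIUS_M = 6_371_000.0

def horizontal_displacement_m(lat1, lon1, lat2, lon2):
    """Flat-earth small-displacement distance between two GPS fixes (degrees)."""
    lat_mid = math.radians((lat1 + lat2) / 2.0)
    d_north = math.radians(lat2 - lat1) * EARTH_RADIUS_M
    d_east = math.radians(lon2 - lon1) * EARTH_RADIUS_M * math.cos(lat_mid)
    return math.hypot(d_north, d_east)

# Hypothetical station fix before / after the quake (coordinates invented):
before = (38.2682000, 140.8694000)
after_ = (38.2682000, 140.8694573)  # small eastward shift in longitude
print(round(horizontal_displacement_m(*before, *after_), 2))  # 5.0 meters east
```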

[ IMAGE ( above ): NASA JPL Project ARIA Tectonic Plate Seismic Wave Direction Map ( click image to enlarge and read ) ]

The National Aeronautics and Space Administration ( NASA ) Jet Propulsion Laboratory ( JPL ) at the California Institute of Technology ( Pasadena ), also known as CALTECH, Project Advanced Rapid Imaging and Analysis ( ARIA ) used GEONET RINEX data with JPL GIPSY-OASIS software to obtain kinematic “precise point positioning solutions” – from a bias-fixing method of a ‘single station’ matched to JPL orbit and clock products – to produce their seismic displacement projection map details, which carry an inherent ‘95% error-rating’ that is itself an ‘estimate.’ This ‘proves’ that ‘all these U.S. government organizations supposedly know’ ( after spending billions of dollars ), and are ‘only willing to publicly provide,’ may be ‘only 5% accurate.’ So much for what these U.S. government organizations ‘publicly announce’ as their “precise point positioning solutions.”

Pay Any Price?

More ‘double-speak’ and ‘psycho-babble’ serves only to distract the public away from the ‘truth’ as to ‘precisely what’ U.S. taxpayer dollars are ‘actually producing.’ ‘Now knowing this,’ if ‘those same officials’ ever ‘worked for a small business’ they would either be ‘arrested’ for ‘fraud’ or ‘fired’ for ‘incompetence’; however, since ‘none of them’ will ever ‘admit to their own incompetence,’ their ‘leadership’ needs to see ‘those responsible’ virtually ‘swing from’ the end of an ‘incredibly long U.S. Department of Justice rope.’

Unfortunately, the facts surrounding all this only get worse.

[ IMAGE ( above ): Tectonic Plates ( brown color ) Sinking and Sunk On Earth Core. ]

Earthquake Prediction Fallacy

Earthquake prediction will ‘never be an accomplished finite science for people to rely upon,’ even though huge amounts of money are being wasted on ‘technology’ for ‘detection sensors’ reading “Seismic Waveforms” ( also known as ) “S Waves” that ‘can be detected and stored in computer databases.’ The reason is a significant fact that will never be learned no matter how much money or time is devoted to the unsolvable problem: the Earth’s sub-crustal regions consist primarily of ‘molten lake regions’ filled with ‘floating tectonic plates’ that are ‘moving while sinking’ and ‘cannot be tested’ for ‘rock density’ or ‘accumulated pressures’ existing ‘far beneath’ the ‘land surface tectonic plates.’

The very best, and all, that technology can ever perform for the public is to record ‘surface tectonic plates grinding against one another,’ where ‘only that action’ ( alone ) does in fact emit upward ‘acoustic waveform patterns’ named ‘seismic waves’ or ‘s-waves’ that ‘do occur’ – but ‘only when tectonic plates are moving.’

While a ‘public early warning’ might be helpful for curtailing ‘vehicular traffic’ crossing an ‘interstate bridge’ that might collapse, or for stopping ‘train traffic,’ and thousands of people’s lives could be saved, it would fail to serve the millions more living in buildings that collapse.
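The lead time any such warning can offer is bounded by physics: alerts are triggered by the fast P-wave, while most damage arrives with the slower S-wave, so the usable margin is the difference in their travel times. With textbook crustal wave speeds ( assumed here, not values from this report ), the margin is only seconds:

```python
VP_KM_S = 6.0  # typical crustal P-wave speed, km/s (assumption)
VS_KM_S = 3.5  # typical crustal S-wave speed, km/s (assumption)

def warning_seconds(epicentral_distance_km: float) -> float:
    """Lead time between P-wave detection and damaging S-wave arrival."""
    return epicentral_distance_km / VS_KM_S - epicentral_distance_km / VP_KM_S

for d in (50, 100, 200):
    print(d, "km ->", round(warning_seconds(d), 1), "s")  # 6.0, 11.9, 23.8 seconds
```

Note the trade-off this implies: the places with the most warning time are the farthest away, i.e. the least affected.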

Early Warning Exclusivity

Governments, using publicly unfamiliar terms, have ‘statistically analyzed’ “international economics” related to “national infrastructure preparedness” ( ‘early warning systems’ ) of two ( 2 ) kinds: “public” ( i.e. ‘utility companies,’ via ‘government’ with ‘industrial leadership’ meetings ) and “private” ( i.e. ‘residents,’ via ‘television,’ ‘radio,’ ‘newspaper’ and ‘internet’ only ‘commercial advertisements’ ). Between the two, “national disaster mitigation” sees ‘primary designated provisions’ for “early warning” of “high density population centers” near “coastal or low-lying regions” ( ‘large bodies of ocean, lake and river water’ ) – but for only one ( 1 ), the “public” ( ‘utility company’ ) side, “in the interest of national security,” limiting ‘national economic burdens’ from any significant Earth Event impact ‘aftermath.’

In short, and without all the governmentese ‘psycho-babble’ and ‘double-speak’: costs continue being spent on ‘high technology’ efforts to ‘perfect’ a “seismic early warning” for “exclusive use” ( ‘national government control’ ) that “provides” ( ‘control over’ ) “all major utility company distribution points” ( facilities from where ‘electrical power is generated’ ) able to “interrupt power” ( ‘stop the flow of electricity nationwide’ from ‘distribution stations’ ), thus “saving additional lives” from “disastrous other problems” ( ‘aftermath loss of lives and injuries’ caused by ‘nuclear fallout radiation,’ ‘exploding electrical transformers,’ and ‘fires associated with overloaded electrical circuits’ ).

Logically, ‘much’ – but ‘not all’ – of the aforementioned ‘makes perfect sense,’ except to “John Doe” or “Jane Doe,” ‘exemplified anonymously’ ( herein ) as individuals who, if ‘warned earlier,’ could have ‘stopped their vehicle before crossing the bridge that collapsed’ or simply ‘stepped out of the way of a huge sign falling on them,’ rather than being ‘killed’ or ‘maimed.’ However, one might additionally consider how many more would ‘otherwise be killed or maimed’ in the ‘ensuing mass public mob panics’ caused by ‘receiving’ an “early warning.” A tough call for many, decided by few.

Earth Data Publicly Minimized

Tohoku-oki earthquake ‘seismic waveform data’ showing the Japan east coast tectonic plate “bilaterally ruptured” ( split in half for a distance of over 310 miles ) was obtained from the USArray seismic stations ( United States ) and was analyzed and later modelled by Caltech scientists Lingsen Meng and Jean-Paul Ampuero, who created a preliminary data animation demonstrating a ‘super major’ Earth Event simultaneously occurring when the ‘major’ earthquake struck Japan.

U.S. National Security Stations Technology Systems Projects

EarthScope, to which the United States Seismic Array ( USArray ) data management plan belongs, is composed of three ( 3 ) Projects:

1. Incorporated Research Institutions for Seismology ( IRIS ), a National Science Foundation ( NSF ) consortium of universities, Data Management Center ( DMC ) is ‘managed’ by the “United States Seismic Array ( USArray )” Project;

2. UNAVCO INC. ‘implemented’ “Plate-Boundary Observatory ( PBO )” Project; and,

3. U.S. Geological Survey ( USGS ) 'operated' "San Andreas Fault Observatory at Depth ( SAFOD )" Project at Stanford University ( California ).

Simultaneous Earth Data Management

The data management plan for the "EarthScope" component USArray is held by the USArray IRIS DMC.

USArray consists of four ( 4 ) data generating components:

Permanent Network

Advanced National Seismic System ( ANSS ) BackBone ( BB ) is a joint effort – between IRIS, USArray and USGS – to establish a 'Permanent Network' of approximately one-hundred ( 100 ) Earth monitoring 'receiving stations' ( alone ) located in the Continental United States ( CONUS ), or lower 48 states of America, in-addition to 'other stations' located in the State of Alaska ( alone ).

Earth Data Multiple Other Monitors

USArray data contribution to the Advanced National Seismic System ( ANSS ) BackBone ( BB ) consists of:

Nine ( 9 ) new ‘international Earth data accumulation receiving stations’ akin to the Global Seismic Network ( GSN );

Four ( 4 ) “cooperative other stations” from “Southern Methodist University” and “AFTAC”;

Twenty-six ( 26 ) ‘other receiving stations’ from the Advanced National Seismic System ( ANSS ) with ‘upgrade funding’ taken out-of the USArray Project “EarthScope;” plus,

Sixty ( 60 ) additional stations of the Advanced National Seismic System ( ANSS ) BackBone ( BB ) network that 'currently exist', 'will be installed' or 'will be upgraded' so that 'data channel stream feeds' can and 'will be made seamlessly available' through the IRIS DMC, where 'data can be continuously recorded' at forty ( 40 ) samples per second and where one ( 1 ) sample per second can and 'will be continuously transmitted in real-time' back into the IRIS DMC. Quality assurance is held at facilities located in 'both' Albuquerque, New Mexico and Golden, Colorado, with the U.S. Geological Survey ( USGS ) handling 'some operational responsibilities' thereof.
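The 40-samples-per-second recording versus 1-sample-per-second real-time feed described above amounts to a simple 40:1 decimation. A minimal sketch, illustrative only – real seismic pipelines apply anti-alias filtering before decimating, and the data here are placeholder values, not actual station channels:

```python
def decimate(samples, factor=40):
    """Keep every `factor`-th sample: a naive 40:1 decimation
    with no anti-alias filter (real pipelines filter first)."""
    return samples[::factor]

# Two seconds of a 40-samples-per-second channel (80 samples) reduce to
# the two samples a 1-sample-per-second real-time feed would carry.
two_seconds = list(range(80))
realtime = decimate(two_seconds)
assert len(realtime) == 2
```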

Albuquerque Seismological Laboratory ( ASL ) –

Albuquerque Seismological Laboratory ( ASL ) supports operation and maintenance of seismic networks for the U.S. Geological Survey ( USGS ) portion of the Global Seismographic Network ( GSN ) and Advanced National Seismic System ( ANSS ) Backbone network.

ASL runs the Advanced National Seismic System ( ANSS ) depot facility supporting the Advanced National Seismic System ( ANSS ) networks.

ASL also maintains the PASSCAL Instrument Center ( PIC ) facility at New Mexico Tech ( Socorro, New Mexico ), developing, testing and evaluating seismology monitoring and recording equipment.

Albuquerque Seismological Laboratory ( ASL ) staff are based in ‘both’ Albuquerque, New Mexico and Golden, Colorado.

Top-Down Bottom-Up Data Building Slows Earthquake Notifications

Seismic waveform ( 'seismic wave form frequency' ) data received by the Global Seismic Network ( GSN ) and Advanced National Seismic System ( ANSS ) BackBone ( BB ) network is transmitted 'slower than real-time', as only 'near-time data' ( e.g. tape and compact disc recordings ) is sent to the National Earthquake Information Center ( NEIC ) 'station' of the U.S. Geological Survey ( USGS ), 'officially heralded' for so-called "rapid earthquake response."

Unbelievably, in-addition to the aforementioned 'slow Earth Event data delivery process', a number of 'data receiving stations' have absolutely 'no data streaming telemetry' transmission capabilities whatsoever, so those station data recordings – on 'tapes' and 'compact discs' – are delivered by 'other even more time-consuming routes' before that data can even reach the U.S. Geological Survey ( USGS ). In short, the huge amounts of money being spent go toward 'increasing computer technologies', 'sensors', 'satellites', 'data stream channel networks' and 'secure facility building stations' from the 'top, down' instead of building 'monitoring stations' and 'recording stations' from the 'bottom, up' until the entire earthquake monitoring and notification system is finally built properly. As it currently stands, the 'apple cart stations continue being built more and more' while the 'apple tree stations are not receiving the proper technological nutrients' to deliver apples ( 'data' ) into notification markets ( 'public' ) where all this could do some good.

U.S. National Security Reviews Delay Already Slow Earthquake Notifications

IRIS Data Management Center ( DMC ) – after processing all incoming data streams from reporting stations around the world – then distributes seismic waveform data 'back to' both the Global Seismic Network ( GSN ) and Advanced National Seismic System ( ANSS ) BackBone ( BB ) network operations, but only after the seismic waveform data has been 'thoroughly screened' under what U.S. national security government Project leadership has deemed its 'need to control all data' by "limiting requirements" ( 'red tape' ), because 'all data must undergo' a long, arduous 'secure data clearing process' before any data can be released. Amusingly to some, the U.S. government – in its race to create another 'official acronym' of 'double-speak' – named that national security clearing process ever so aptly:

“Quality Assurance Framework” ( QUACK )

Enough said.

Let the public decide what to do with 'those irresponsible officials'; after all, 'only mass public lives' are 'swinging in the breeze' at the very end of a now seemingly endless 'disinformation service rope' being paid for by the tax-paying public.

In the meantime, while we all wait for another Earth Event to take place, what ( besides this report ) might 'slap the official horse', spurring it to move quickly?

How about us? What should we do? Perhaps brushing-up on a little basic knowledge might help.

Inner Earth Deeper Structure Deep Focus Earthquakes Rays And Related Anomalies

There is no substitute for knowledge – information technology ( IT ) sits at the focal point of many new discoveries aided by supercomputing, modelling and analytics – but common sense does pretty well too.

The following information, although an incredibly brief overview of a wide variety of topics surrounding planet Earth, scratches more than just the surface: it reaches deep structure and deep focus, impacting a multitude of generations from as far back as 700 years before the birth of Christ ( B.C. ).

Clearly, referenced "Encyclopaedia Britannica" general public access information consists of second-hand observations of records from other worldwide information collection sources, such as:

– Archives ( e.g. governments, institutions, public and private );

– Symposiums ( e.g. white papers );

– Journals ( professional and technical publications );

– Other information collection sources; and,

– Other information publications.

Encyclopaedias, available in a wide variety of styles and formats, are ’portable catalogs containing a large amount of basic information on a wide variety of topics’ available worldwide to billions of people for increasing their knowledge.

Encyclopedia information formats vary, and include:

– Paper ‘books’ with either ’printed ink’ ( sighted ) or ’embossed dots’ ( Braille );

– Plastic ‘tape cartridges’ ( ‘electromagnetic’ media ) or ‘compact discs’ ( ‘optical’ media ) with ‘electronic device display’; or,

– Electronic 'internet' ( 'signal computing' via 'satellite' or 'telecomputing' via 'landline' or 'node' networking ) with 'electronic device display'.

After thoroughly reviewing the Encyclopaedia Britannica 'specific compilation', independent review found a reasonable facsimile of the original, reformatted for easier public comprehension ( reproduced further below ).

Surprisingly, after that Encyclopaedia Britannica 'specific compilation' information was reformatted for clearer reading comprehension, inner Earth 'deep-structure' geophysical studies formed an amazing correlation with additional factual activities, within an equally amazing date chronology of man-made nuclear fracturing reformations of Earth geology – activities documented worldwide more than 1/2 century ago but somehow forgotten, either by chance or secret circumstance.

How could the Encyclopaedia Britannica – or, for that matter, anyone else – have missed something on such a grand scale that is now so obvious?

… [ TEMPORARILY EDITED-OUT FOR REVISION PURPOSES ONLY –  ] …

To understand how all this relates, ‘begin with a better basic understanding’ by continuing to read the researched information ( below ):

====

Circa: March 21, 2012

Source:  Encyclopaedia Britannica

Earthquakes

Definition, Earthquake: Sudden shaking of the ground caused by the passage of seismic waves through Earth's rocks.

Seismic waves are produced when some form of energy stored in the Earth’s crust is suddenly released, usually when masses of rock straining against one another suddenly fracture and “slip.” Earthquakes occur most often along geologic faults, narrow zones where rock masses move in relation to one another. Major fault lines of the world are located at the fringes of the huge tectonic plates that make up the Earth’s crust. ( see table of major earthquakes further below )

Until the early 20th Century ( 1900s ), little was understood about earthquakes; with the emergence of seismology, involving the scientific study of all aspects of earthquakes, answers are now being yielded to long-standing questions as to why and how earthquakes occur.

About 50,000 earthquakes, large enough to be noticed without the aid of instruments, occur every year over the entire Earth, and of these approximately one-hundred ( 100 ) are of sufficient size to produce substantial damage if their centers are near human habitation.

Very great earthquakes occur on average about once a year; over the centuries, however, these earthquakes have been responsible for millions of deaths and an incalculable amount of property damage.

Earthquakes A-Z

Earth’s major earthquakes occur primarily in belts coinciding with tectonic plate margins, a pattern apparent since the earliest earthquake catalogs ( circa 700 B.C. ) and now more readily discernible on modern seismicity maps depicting instrumentally determined earthquake epicentres.

Most important is the earthquake Circum-Pacific Belt, affecting many populated coastal regions around the Pacific Ocean, namely:

– South America;

– North America & Alaska;

– Aleutian Islands;

– Japan;

– New Zealand; and,

– New Guinea.

80% of the energy presently estimated to be released in earthquakes comes from those whose epicentres lie in the Circum-Pacific Belt.

Seismic activity is by no means uniform throughout the belt, and there are a number of branches at various points. Because at many places the Circum-Pacific Belt is associated with volcanic activity, it has been popularly dubbed the “Pacific Ring of Fire.”

A second ( 2nd ) belt, known as the Alpide Belt, passes through the Mediterranean region eastward through Asia and joins the Circum-Pacific Belt in the East Indies; energy released in earthquakes from the Alpide Belt is about 15% of the world total.

There are also connected belts of seismic activity, primarily along oceanic ridges, including those in the:

– Arctic Ocean;

– Atlantic Ocean;

– Indian Ocean ( western ); and along,

– East Africa rift valleys.

This global seismicity distribution is best understood in terms of its plate tectonic setting.

Forces

Earthquakes are caused by sudden releases of energy within a limited region of Earth rocks; the stored energy can be released by:

– Elastic strain;

– Gravity;

– Chemical reactions; and / or,

– Massive rock body motion.

Of all these, release of elastic rock strain is most important because this form of energy is the only kind that can be stored in sufficient quantities within the Earth to produce major ground disturbances.

Earthquakes, associated with this type of energy release, are called: Tectonic Earthquakes.

Tectonics

Tectonic plate earthquakes are explained by the so-called elastic rebound theory, formulated by the American geologist Harry Fielding Reid after the San Andreas Fault ruptured in 1906, generating the great San Francisco earthquake.

According to Reid's theory of elastic rebound, a tectonic earthquake occurs when strain energy in rock masses has accumulated ( built-up ) to a point where the resulting stresses exceed the strength of the rocks, at which point sudden fracturing results.

Fractures propagate ( travel ) rapidly ( see speeds further below ) through the rock, usually tending in the same direction and sometimes extending many kilometres along a local zone of weakness.

In 1906, for instance, the San Andreas Fault slipped along a plane 270 miles ( 430 kilometers ) long, a line along which the ground was displaced horizontally as much as 20 feet ( 6 meters ).

As a fault rupture progresses along or up the fault, rock masses are flung in opposite directions, and thus spring back to a position where there is less strain.

At any one point this movement may take place not at once but rather in irregular steps; these sudden slowings and restartings give rise to the vibrations that propagate as seismic waves.

Such irregular properties of fault rupture are now included in 'physical modeling' and 'mathematical modeling' of earthquake sources.

Earthquake Focus ( Foci )

Roughnesses along the fault are referred to as asperities, and places where the rupture slows or stops are said to be fault barriers. Fault rupture starts at the earthquake focus ( foci ), a spot that ( in many cases ) lies 5 to 15 kilometers beneath the surface, from which the rupture propagates ( travels ) in one ( 1 ) or both directions over the fault plane until stopped ( or slowed ) at a barrier ( boundary ).

Sometimes, instead of being stopped at the barrier, the fault rupture recommences on the far side; at other times the stresses in the rocks break the barrier, and the rupture continues.

Earthquakes have different properties depending on the type of fault slip that causes them.

The usual ‘fault model’ has a “strike” ( i.e., direction, from north, taken by a horizontal line in the fault plane ) and a “dip” ( i.e. angle from the horizontal shown by the steepest slope in the fault ).

Movement parallel to the dip is called dip-slip faulting.

The lower wall of an inclined fault is the 'footwall'; lying over the footwall is the hanging wall. In dip-slip faults, if the hanging-wall block moves downward relative to the footwall block, it is called "normal" faulting; the opposite motion, with the hanging wall moving upward relative to the footwall, produces reverse or thrust faulting.

When rock masses slip past each other ( parallel to the strike area ) movement is known as strike-slip faulting.

Strike-slip faults are right lateral or left lateral, depending on whether the block on the opposite side of the fault from an observer has moved to the right or left.
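The dip-slip / strike-slip distinction above is often encoded numerically as a 'rake' angle measured in the fault plane. A rough classifier, assuming the common seismological convention ( rake near 0° or ±180° is strike-slip, near +90° reverse / thrust, near −90° normal ); the 45-degree band edges here are illustrative assumptions, not a published standard:

```python
def fault_type(rake_deg):
    """Classify fault slip from the rake angle (degrees): ~0 or +/-180
    strike-slip, ~+90 reverse (thrust), ~-90 normal."""
    r = ((rake_deg + 180.0) % 360.0) - 180.0   # normalize into (-180, 180]
    if abs(r) <= 45 or abs(r) >= 135:
        return "strike-slip"
    return "reverse (thrust)" if r > 0 else "normal"

assert fault_type(0) == "strike-slip"        # motion parallel to strike
assert fault_type(90) == "reverse (thrust)"  # hanging wall moves up
assert fault_type(-90) == "normal"           # hanging wall moves down
```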

All known faults are assumed to have been the seat of one or more earthquakes in the past, though tectonic movements along faults are often slow, and most geologically ancient faults are now aseismic ( i.e., they no longer cause earthquakes ).

Actual faulting associated with an earthquake may be complex, and it is often unclear whether, in one ( 1 ) particular earthquake, the total energy issues from a single ( 1 ) fault plane.

Observed geologic faults sometimes show relative displacements on the order of hundreds of kilometres over geologic time, whereas the sudden slip offsets that produce seismic waves may range from only several centimetres to tens of metres.

During the 1976 Tangshan earthquake ( for example ), a surface strike-slip of about 1 meter was observed along the causative fault east of Beijing, China, and later ( as another example ) during the 1999 Taiwan earthquake the Chelung-pu fault slipped vertically up to 8 meters.

Volcanism & Earthquake Movement

A separate type of earthquake is associated with volcanic activity and is known as a volcanic earthquake.

Even in such cases the disturbance is officially believed to result from a sudden slip of rock masses adjacent to the volcano and the consequent release of elastic strain energy. The stored energy, however, may be partially of hydrodynamic origin, due to heat provided by magma moving ( tidally ) through underground reservoirs beneath volcanoes, or to the release of gas under pressure. There is certainly a clear correspondence between the geographic distribution of volcanoes and major earthquakes, particularly within the Circum-Pacific Belt and along ocean ridges.

Volcano vents, however, are generally several hundred kilometres from the epicentres of most 'major shallow earthquakes', and it is believed 'many earthquake sources' occur 'nowhere near active volcanoes'.

Even in cases where an earthquake focus occurs directly below structures marked by volcanic vents, there is officially probably no immediate causal connection between the two ( 2 ) activities; most likely both are the result of the same tectonic processes.

Earth Fracturing

Artificially Created Inductions

Earthquakes are sometimes caused by human activities, including:

– Nuclear Explosion ( large megaton yield ) detonations underground;

– Oil & Gas wells ( deep Earth fluid injections );

– Mining ( deep Earth excavations );

– Reservoirs ( deep Earth voids filled with incredibly heavy large bodies of water ).

In the case of deep mining, the removal of rock produces changes in the strain around the tunnels.

Slip on adjacent preexisting faults, or outward shattering of rock into the new cavities, may occur.

In fluid injection, the slip is thought to be induced by premature release of elastic rock strain, as in the case of tectonic earthquakes after fault surfaces are lubricated by the liquid.

Large underground nuclear explosions have been known to produce slip on already strained faults in the vicinity of test devices.

Reservoir Induction

Of the various earthquake-causing activities cited above, the filling of large reservoirs ( see China ) is most prominent.

More than 20 significant cases have been documented in which local seismicity has increased following the impounding of water behind high dams. Often, causality cannot be substantiated, because no data exists to allow comparison of earthquake occurrence before and after the reservoir was filled.

Reservoir-induction effects are most marked for reservoirs exceeding 100 metres ( 330 feet ) in depth and 1 cubic km ( 0.24 cubic mile ) in volume. Three ( 3 ) sites where such connections have very probably occurred, are the:

Hoover Dam in the United States;

Aswan High Dam in Egypt; and,

Kariba Dam on the border between Zimbabwe and Zambia in Africa.

The most generally accepted explanation for earthquake occurrence in such cases assumes that rocks near the reservoir are already strained from regional tectonic forces to a point where nearby faults are almost ready to slip. Water in the reservoir adds a pressure perturbation that triggers the fault rupture. The pressure effect is perhaps enhanced by the fact that the rocks along the fault have lower strength because of increased water-pore pressure. These factors notwithstanding, the filling of most large reservoirs has not produced earthquakes large enough to be a hazard.
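The reservoir-size thresholds cited above ( depth exceeding 100 metres, volume exceeding 1 cubic km ) can be restated as a trivial screening predicate; a sketch with illustrative example values, not actual reservoir data:

```python
def induction_prone(depth_m, volume_km3):
    """True when a reservoir exceeds both size thresholds beyond which
    reservoir-induction effects are said to be most marked."""
    return depth_m > 100.0 and volume_km3 > 1.0

assert induction_prone(317, 2.0)       # deep and large: flagged
assert not induction_prone(50, 5.0)    # large but shallow: not flagged
assert not induction_prone(150, 0.5)   # deep but small: not flagged
```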

Specific seismic source mechanisms associated with reservoir induction have been established in a few cases. For the main shock at the Koyna Dam and Reservoir in India ( 1967 ), the evidence favours strike-slip faulting motion. At both the Kremasta Dam in Greece ( 1965 ) and the Kariba Dam in Zimbabwe-Zambia ( 1961 ), the generating mechanism was dip-slip on normal faults.

By contrast, thrust mechanisms have been determined for sources of earthquakes at the lake behind Nurek Dam in Tajikistan. More than 1,800 earthquakes occurred during the first nine ( 9 ) years after water was impounded in this 317-meter-deep reservoir in 1972, a rate amounting to four ( 4 ) times the average number of shocks in the region prior to filling.

Nuclear Explosion Measurement Seismology Instruments

By 1958, representatives from several countries, including the United States and the Soviet Union, met to discuss the technical basis for a nuclear test-ban treaty; among the matters considered was the feasibility of developing effective means to detect underground nuclear explosions and to distinguish them seismically from earthquakes.

After that conference, much special research was directed to seismology, leading to major advances in seismic signal detection and analysis.

Recent seismological work on treaty verification has involved using high-resolution seismographs in a worldwide network, estimating the yield of explosions, studying wave attenuation in the Earth, determining wave amplitude and frequency spectra discriminants, and applying seismic arrays. The findings of such research have shown that underground nuclear explosions, compared with natural earthquakes, usually generate seismic waves through the body of the Earth that are of much larger amplitude than the surface waves. This telltale difference along with other types of seismic evidence suggest that an international monitoring network of two-hundred and seventy ( 270 ) seismographic stations could detect and locate all seismic events over the globe of magnitude 4.0 and above ( corresponding to an explosive yield of about 100 tons of TNT ).

Earthquake Effects

Earthquakes have varied effects, including changes in geologic features, damage to man-made structures, and impact on human and animal life. Most of these effects occur on solid ground, but, since most earthquake foci are actually located under the ocean bottom, severe effects are often observed along the margins of oceans.

Surface Phenomena

Earthquakes often cause dramatic geomorphological changes, including ground movements – either vertical or horizontal – along geologic fault traces; rising, dropping, and tilting of the ground surface; changes in the flow of groundwater; liquefaction of sandy ground; landslides; and mudflows. The investigation of topographic changes is aided by geodetic measurements, which are made systematically in a number of countries seriously affected by earthquakes.

Earthquakes can do significant damage to buildings, bridges, pipelines, railways, embankments, and other structures. The type and extent of damage inflicted are related to the strength of the ground motions and to the behaviour of the foundation soils. In the most intensely damaged region, called the meizoseismal area, the effects of a severe earthquake are usually complicated and depend on the topography and the nature of the surface materials. They are often more severe on soft alluvium and unconsolidated sediments than on hard rock. At distances of more than 100 km (60 miles) from the source, the main damage is caused by seismic waves traveling along the surface. In mines there is frequently little damage below depths of a few hundred metres even though the ground surface immediately above is considerably affected.

Earthquakes are frequently associated with reports of distinctive sounds and lights. The sounds are generally low-pitched and have been likened to the noise of an underground train passing through a station. The occurrence of such sounds is consistent with the passage of high-frequency seismic waves through the ground. Occasionally, luminous flashes, streamers, and bright balls have been reported in the night sky during earthquakes. These lights have been attributed to electric induction in the air along the earthquake source.

Tsunamis

Following certain earthquakes, very long-wavelength water waves in oceans or seas sweep inshore. More properly called seismic sea waves or tsunamis ( tsunami is a Japanese word for “harbour wave” ), they are commonly referred to as tidal waves, although the attractions of the Moon and Sun play no role in their formation. They sometimes come ashore to great heights—tens of metres above mean tide level—and may be extremely destructive.

The usual immediate cause of a tsunami is sudden displacement in a seabed sufficient to cause the sudden raising or lowering of a large body of water. This deformation may be the fault source of an earthquake, or it may be a submarine landslide arising from an earthquake.

Large volcanic eruptions along shorelines, such as those of Thera (c. 1580 bc) and Krakatoa (ad 1883), have also produced notable tsunamis. The most destructive tsunami ever recorded occurred on December 26, 2004, after an earthquake displaced the seabed off the coast of Sumatra, Indonesia. More than 200,000 people were killed by a series of waves that flooded coasts from Indonesia to Sri Lanka and even washed ashore on the Horn of Africa.

Following the initial disturbance to the sea surface, water waves spread in all directions. Their speed of travel in deep water is given by the formula √(gh), where h is the sea depth and g is the acceleration of gravity.

This speed may be considerable—100 metres per second ( 225 miles per hour ) when h is 1,000 metres ( 3,300 feet ). However, the amplitude ( i.e., the height of disturbance ) at the water surface does not exceed a few metres in deep water, and the principal wavelength may be on the order of hundreds of kilometres; correspondingly, the principal wave period—that is, the time interval between arrival of successive crests—may be on the order of tens of minutes. Because of these features, tsunami waves are not noticed by ships far out at sea.
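The deep-water speed formula can be checked directly; a one-line computation of √(gh) for the h = 1,000-metre case quoted above:

```python
import math

def tsunami_speed(depth_m, g=9.8):
    """Shallow-water (long-wave) speed v = sqrt(g * h), in metres per second."""
    return math.sqrt(g * depth_m)

v = tsunami_speed(1000)
assert 98 < v < 100   # roughly 99 m/s, i.e. about 100 m/s as stated above
```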

When tsunamis approach shallow water, however, the wave amplitude increases. The waves may occasionally reach a height of 20 to 30 metres above mean sea level in U- and V-shaped harbours and inlets. They characteristically do a great deal of damage in low-lying ground around such inlets. Frequently, the wave front in the inlet is nearly vertical, as in a tidal bore, and the speed of onrush may be on the order of 10 metres per second. In some cases there are several great waves separated by intervals of several minutes or more. The first of these waves is often preceded by an extraordinary recession of water from the shore, which may commence several minutes or even half an hour beforehand.

Organizations, notably in Japan, Siberia, Alaska, and Hawaii, have been set up to provide tsunami warnings. A key development is the Seismic Sea Wave Warning System, an internationally supported system designed to reduce loss of life in the Pacific Ocean. Centred in Honolulu, it issues alerts based on reports of earthquakes from circum-Pacific seismographic stations.

Seiches

Seiches are rhythmic motions of water in nearly landlocked bays or lakes that are sometimes induced by earthquakes and tsunamis. Oscillations of this sort may last for hours or even for a day or two.

The great Lisbon earthquake of 1755 caused the waters of canals and lakes in regions as far away as Scotland and Sweden to go into observable oscillations. Seiche surges in lakes in Texas, in the southwestern United States, commenced between 30 and 40 minutes after the 1964 Alaska earthquake, produced by seismic surface waves passing through the area.

A related effect is the result of seismic waves from an earthquake passing through the seawater following their refraction through the seafloor. The speed of these waves is about 1.5 km (0.9 mile) per second, the speed of sound in water. If such waves meet a ship with sufficient intensity, they give the impression that the ship has struck a submerged object. This phenomenon is called a seaquake.

Earthquake Intensity and Magnitude Scales

The violence of seismic shaking varies considerably over a single affected area. Because the entire range of observed effects is not capable of simple quantitative definition, the strength of the shaking is commonly estimated by reference to intensity scales that describe the effects in qualitative terms. Intensity scales date from the late 19th and early 20th centuries, before seismographs capable of accurate measurement of ground motion were developed. Since that time, the divisions in these scales have been associated with measurable accelerations of the local ground shaking. Intensity depends, however, in a complicated way not only on ground accelerations but also on the periods and other features of seismic waves, the distance of the measuring point from the source, and the local geologic structure. Furthermore, earthquake intensity, or strength, is distinct from earthquake magnitude, which is a measure of the amplitude, or size, of seismic waves as specified by a seismograph reading ( see below Earthquake magnitude ).

A number of different intensity scales have been set up during the past century and applied to both current and ancient destructive earthquakes. For many years the most widely used was a 10-point scale devised in 1878 by Michele Stefano de Rossi and François-Alphonse Forel. The scale now generally employed in North America is the Mercalli scale, as modified by Harry O. Wood and Frank Neumann in 1931, in which intensity is considered to be more suitably graded.

A 12-point abridged form of the modified Mercalli scale is provided below. Modified Mercalli intensity VIII is roughly correlated with peak accelerations of about one-quarter that of gravity ( g = 9.8 metres, or 32.2 feet, per second squared ) and ground velocities of 20 cm ( 8 inches ) per second. Alternative scales have been developed in both Japan and Europe for local conditions.
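The intensity-VIII correlation quoted above is easy to restate numerically; a sketch converting the quarter-gravity figure into SI units:

```python
g = 9.8                   # acceleration of gravity, metres per second squared
peak_accel = 0.25 * g     # modified Mercalli VIII: about one-quarter of gravity
peak_velocity_cm_s = 20   # correlated ground velocity, cm per second

assert abs(peak_accel - 2.45) < 1e-9   # i.e. roughly 2.45 m/s^2
```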

The European ( MSK ) scale of 12 grades is similar to the abridged version of the Mercalli.

Modified Mercalli scale of earthquake intensity

  I. Not felt. Marginal and long-period effects of large earthquakes.

  II. Felt by persons at rest, on upper floors, or otherwise favourably placed to sense tremors.

  III. Felt indoors. Hanging objects swing. Vibrations are similar to those caused by the passing of light trucks. Duration can be estimated.

  IV. Vibrations are similar to those caused by the passing of heavy trucks (or a jolt similar to that caused by a heavy ball striking the walls). Standing automobiles rock. Windows, dishes, doors rattle. Glasses clink, crockery clashes. In the upper range of grade IV, wooden walls and frames creak.

  V. Felt outdoors; direction may be estimated. Sleepers awaken. Liquids are disturbed, some spilled. Small objects are displaced or upset. Doors swing, open, close. Pendulum clocks stop, start, change rate.

  VI. Felt by all; many are frightened and run outdoors. Persons walk unsteadily. Pictures fall off walls. Furniture moves or overturns. Weak plaster and masonry cracks. Small bells ring (church, school). Trees, bushes shake.

  VII. Difficult to stand. Noticed by drivers of automobiles. Hanging objects quivering. Furniture broken. Damage to weak masonry. Weak chimneys broken at roof line. Fall of plaster, loose bricks, stones, tiles, cornices. Waves on ponds; water turbid with mud. Small slides and caving along sand or gravel banks. Large bells ringing. Concrete irrigation ditches damaged.

  VIII. Steering of automobiles affected. Damage to masonry; partial collapse. Some damage to reinforced masonry; none to reinforced masonry designed to resist lateral forces. Fall of stucco and some masonry walls. Twisting, fall of chimneys, factory stacks, monuments, towers, elevated tanks. Frame houses moved on foundations if not bolted down; loose panel walls thrown out. Decayed pilings broken off. Branches broken from trees. Changes in flow or temperature of springs and wells. Cracks in wet ground and on steep slopes.

  IX. General panic. Weak masonry destroyed; ordinary masonry heavily damaged, sometimes with complete collapse; reinforced masonry seriously damaged. Serious damage to reservoirs. Underground pipes broken. Conspicuous cracks in ground. In alluvial areas, sand and mud ejected; earthquake fountains, sand craters.

  X. Most masonry and frame structures destroyed with their foundations. Some well-built wooden structures and bridges destroyed. Serious damage to dams, dikes, embankments. Large landslides. Water thrown on banks of canals, rivers, lakes, and so on. Sand and mud shifted horizontally on beaches and flat land. Railway rails bent slightly.

  XI. Rails bent greatly. Underground pipelines completely out of service.

  XII. Damage nearly total. Large rock masses displaced. Lines of sight and level distorted. Objects thrown into air.

With the use of an intensity scale, it is possible to summarize such data for an earthquake by constructing isoseismal curves, which are lines that connect points of equal intensity. If there were complete symmetry about the vertical through the earthquake’s focus, isoseismals would be circles with the epicentre (the point at the surface of the Earth immediately above where the earthquake originated) as the centre. However, because of the many unsymmetrical geologic factors influencing intensity, the curves are often far from circular. The most probable position of the epicentre is often assumed to be at a point inside the area of highest intensity. In some cases, instrumental data verify this calculation, but not infrequently the true epicentre lies outside the area of greatest intensity.

Earthquake Magnitude

Earthquake magnitude is a measure of the “size” or amplitude of the seismic waves generated by an earthquake source and recorded by seismographs.

Types and nature of these waves are described in Seismic waves ( further below ).

Because the size of earthquakes varies enormously, it is necessary for purposes of comparison to compress the range of wave amplitudes measured on seismograms by means of a mathematical device.

In 1935, American seismologist Charles F. Richter set up a magnitude scale of earthquakes as the logarithm to base 10 of the maximum seismic wave amplitude ( in thousandths of a millimetre ) recorded on a standard seismograph ( the Wood-Anderson torsion pendulum seismograph ) at a distance of 62-miles ( 100 kilometers ) from the earthquake epicentre.

Reduction of amplitudes observed at various distances to the amplitudes expected at the standard distance of 100 kilometers ( 62-miles ) is made on the basis of empirical tables.

Richter magnitudes ML are computed on the assumption that the ratio of the maximum wave amplitudes at any two ( 2 ) given distances is the same for all earthquakes and is independent of azimuth.

Richter first applied his magnitude scale to shallow-focus earthquakes recorded within 600 km of the epicentre in the southern California region. Later, additional empirical tables were set up, whereby observations made at distant stations and on seismographs other than the standard type could be used. Empirical tables were extended to cover earthquakes of all significant focal depths and to enable independent magnitude estimates to be made from body- and surface-wave observations.
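The logarithmic compression described above can be sketched numerically. This is a minimal illustration of Richter's definition at the standard distance, omitting the empirical distance corrections just mentioned ( the function name is ours ):

```python
import math

def richter_ml(amplitude_mm: float) -> float:
    """Local magnitude from the maximum trace amplitude (in millimetres)
    of a standard Wood-Anderson seismograph at the standard distance of
    100 km. The amplitude is converted to thousandths of a millimetre,
    as in Richter's 1935 definition."""
    return math.log10(amplitude_mm * 1000.0)

# An amplitude of 1 mm at 100 km corresponds to ML 3.0;
# each tenfold increase in amplitude adds one magnitude unit.
print(richter_ml(1.0))    # 3.0
print(richter_ml(10.0))   # 4.0
```

The base-10 logarithm is what compresses the enormous range of wave amplitudes into a compact scale.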

A current form of the Richter scale is shown in the table.

Richter scale of earthquake magnitude

magnitude level | category | effects | earthquakes per year
less than 1.0 to 2.9 | micro | generally not felt by people, though recorded on local instruments | more than 100,000
3.0-3.9 | minor | felt by many people; no damage | 12,000-100,000
4.0-4.9 | light | felt by all; minor breakage of objects | 2,000-12,000
5.0-5.9 | moderate | some damage to weak structures | 200-2,000
6.0-6.9 | strong | moderate damage in populated areas | 20-200
7.0-7.9 | major | serious damage over large areas; loss of life | 3-20
8.0 and higher | great | severe destruction and loss of life over large areas | fewer than 3

At the present time a number of different magnitude scales are used by scientists and engineers as a measure of the relative size of an earthquake. The body-wave magnitude ( Mb ), for one, is defined in terms of the logarithm of the maximum amplitude of the P wave recorded on a standard seismograph. Similarly, the surface-wave magnitude ( Ms ) is defined in terms of the logarithm of the maximum amplitude of ground motion for surface waves with a wave period of 20 seconds.

As defined, an earthquake magnitude scale has no lower or upper limit. Sensitive seismographs can record earthquakes with magnitudes of negative value and have recorded magnitudes up to about 9.0 ( the 1906 San Francisco earthquake, for example, had a Richter magnitude of 8.25 ).

A scientific weakness is that there is no direct mechanical basis for magnitude as defined above. Rather, it is an empirical parameter analogous to stellar magnitude assessed by astronomers. In modern practice a more soundly based mechanical measure of earthquake size is used – namely, the seismic moment ( M0 ). Such a parameter is related to the angular leverage of the forces that produce the slip on the causative fault. It can be calculated both from recorded seismic waves and from field measurements of the size of the fault rupture. Consequently, seismic moment provides a more uniform scale of earthquake size based on classical mechanics. This measure allows a more scientific magnitude to be used called moment magnitude ( Mw ). It is proportional to the logarithm of the seismic moment; values do not differ greatly from Ms values for moderate earthquakes. Given the above definitions, the great Alaska earthquake of 1964, with a Richter magnitude ( ML ) of 8.3, also had the values Ms = 8.4, M0 = 820 × 10^27 dyne centimetres, and Mw = 9.2.
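The proportionality between moment magnitude and the logarithm of seismic moment can be illustrated with the Hanks-Kanamori formula ( an assumption here, since the text states only the proportionality, though it is the standard relation ). It reproduces the 1964 Alaska value quoted above:

```python
import math

def moment_magnitude(m0_dyne_cm: float) -> float:
    """Moment magnitude Mw from seismic moment M0 (in dyne centimetres),
    using the standard Hanks-Kanamori relation Mw = (2/3) log10(M0) - 10.7."""
    return (2.0 / 3.0) * math.log10(m0_dyne_cm) - 10.7

# 1964 Alaska earthquake: M0 = 820 × 10^27 dyne·cm
print(round(moment_magnitude(820e27), 1))  # 9.2
```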

Earthquake Energy

Energy in an earthquake passing a particular surface site can be calculated directly from the recordings of seismic ground motion, given, for example, as ground velocity. Such recordings indicate an energy rate of 10^5 watts per square metre (9,300 watts per square foot) near a moderate-size earthquake source. The total power output of a rupturing fault in a shallow earthquake is on the order of 10^14 watts, compared with the 10^5 watts generated in rocket motors.

The surface-wave magnitude Ms has also been connected with the surface energy Es of an earthquake by empirical formulas. These give Es = 6.3 × 10^11 and 1.4 × 10^25 ergs for earthquakes of Ms = 0 and 8.9, respectively. A unit increase in Ms corresponds to approximately a 32-fold increase in energy. Negative magnitudes Ms correspond to the smallest instrumentally recorded earthquakes, a magnitude of 1.5 to the smallest felt earthquakes, and one of 3.0 to any shock felt at a distance of up to 20 km ( 12 miles ). Earthquakes of magnitude 5.0 cause light damage near the epicentre; those of 6.0 are destructive over a restricted area; and those of 7.5 are at the lower limit of major earthquakes.
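The empirical figures above correspond to the relation log10 Es = 11.8 + 1.5 Ms ( with Es in ergs ); a quick check that this reproduces the numbers quoted in the text:

```python
def surface_wave_energy_ergs(ms: float) -> float:
    """Seismic energy Es (ergs) from surface-wave magnitude Ms, using the
    empirical relation log10 Es = 11.8 + 1.5 Ms, consistent with the
    figures quoted in the text."""
    return 10.0 ** (11.8 + 1.5 * ms)

print(f"{surface_wave_energy_ergs(0.0):.1e}")   # 6.3e+11 ergs for Ms = 0
print(f"{surface_wave_energy_ergs(8.9):.1e}")   # 1.4e+25 ergs for Ms = 8.9
# A unit increase in Ms multiplies the energy by 10^1.5, i.e. about 32-fold:
print(round(surface_wave_energy_ergs(6.0) / surface_wave_energy_ergs(5.0)))  # 32
```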

The total annual energy released in all earthquakes is about 10^25 ergs, corresponding to a rate of work between 10 million and 100 million kilowatts. This is approximately one-thousandth of the annual amount of heat escaping from the Earth's interior.

90% of the total seismic energy comes from earthquakes of magnitude 7.0 and higher – that is, those whose energy is on the order of 10^23 ergs or more.

Frequency

There also are empirical relations for the frequencies of earthquakes of various magnitudes. Suppose N to be the average number of shocks per year for which the magnitude lies in a range about Ms. Then log10 N = a − bMs fits the data well both globally and for particular regions; for example, for shallow earthquakes worldwide, a = 6.7 and b = 0.9 when Ms > 6.0. The frequency for larger earthquakes therefore increases by a factor of about 10 when the magnitude is diminished by one unit. The increase in frequency with reduction in Ms falls short, however, of matching the decrease in the energy E. Thus, larger earthquakes are overwhelmingly responsible for most of the total seismic energy release. The number of earthquakes per year with Mb > 4.0 reaches 50,000.
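The frequency relation can be evaluated directly; a minimal sketch using the worldwide shallow-earthquake constants quoted above:

```python
def annual_count(ms: float, a: float = 6.7, b: float = 0.9) -> float:
    """Average number N of shocks per year in a magnitude range about Ms,
    from the empirical relation log10 N = a - b*Ms (constants for shallow
    earthquakes worldwide, valid for Ms > 6.0, as quoted in the text)."""
    return 10.0 ** (a - b * ms)

for ms in (6.0, 7.0, 8.0):
    print(ms, round(annual_count(ms), 1))
# Each unit decrease in Ms raises the frequency by 10^0.9, roughly the
# factor of 10 noted above.
```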

Earthquake Occurrences & Plate Tectonic associations

Global seismicity patterns had no strong theoretical explanation until the dynamic model called plate tectonics was developed during the late 1960s. This theory holds that the Earth’s upper shell, or lithosphere, consists of nearly a dozen large, quasi-stable slabs called plates. The thickness of each of these plates is roughly 50-miles ( 80 km ). Plates move horizontally relative to neighbouring plates at a rate of 0.4 to 4 inches ( 1-cm to 10-cm ) per year over a shell of lesser strength called the asthenosphere. At the plate edges where there is contact between adjoining plates, boundary tectonic forces operate on the rocks, causing physical and chemical changes in them. New lithosphere is created at oceanic ridges by the upwelling and cooling of magma from the Earth’s mantle. The horizontally moving plates are believed to be absorbed at the ocean trenches, where a subduction process carries the lithosphere downward into the Earth’s interior. The total amount of lithospheric material destroyed at these subduction zones equals that generated at the ridges. Seismological evidence ( e.g. location of major earthquake belts ) is everywhere in agreement with this tectonic model.

Earthquake Types:

– Shallow Earthquakes;

– Intermediate Earthquakes;

– Deep Focus ( Deep-Foci ) Earthquakes; and

– Deeper Focus ( Deeper-Foci ) Earthquakes.

Earthquake sources are concentrated along the oceanic ridges, which correspond to divergent plate boundaries.

At subduction zones, which are associated with convergent plate boundaries, intermediate-focus and deep-focus earthquakes mark the location of the upper part of a dipping lithosphere slab.

Focal mechanisms indicate that the stresses are aligned with the dip of the lithosphere underneath the adjacent continent or island arc.

IntraPlate Seismic Event Anomalies

Some earthquakes associated with oceanic ridges are confined to strike-slip faults, called transform faults, that offset the ridge crests. The majority of earthquakes occurring along such horizontal shear faults are characterized by strike-slip motions.

Also in agreement with plate tectonics theory is the high seismicity encountered along the edges of plates where they slide past each other. Plate boundaries of this kind, sometimes called fracture zones, include the:

– San Andreas Fault system in California; and,

– North Anatolian fault system in Turkey.

Such plate boundaries are the site of interplate earthquakes of shallow focus.

Low seismicity within plates is consistent with the plate tectonic description. Small to large earthquakes do occur in limited regions well within the boundaries of plates; however, such ‘intraplate seismic events’ can be explained by tectonic mechanisms other than plate boundary motions and their associated phenomena.

Most parts of the world experience at least occasional shallow earthquakes – those that originate within 60 km ( 40 miles ) of the Earth's outer surface. In fact, the great majority of earthquake foci are shallow. It should be noted, however, that the geographic distribution of smaller earthquakes is less completely determined than that of more severe quakes, partly because the availability of relevant data depends on the distribution of observatories.

Of the total energy released in earthquakes, 12% comes from intermediate earthquakes – that is, quakes with a focal depth ranging from about 60 to 300 km. About 3% of the total energy comes from deeper earthquakes. The frequency of occurrence falls off rapidly with increasing focal depth in the intermediate range. Below intermediate depth the distribution is fairly uniform until the greatest focal depths, of about 700 km ( 430 miles ), are approached.

Deeper-Focus Earthquakes

Deeper-focus earthquakes commonly occur in patterns called Benioff zones that dip into the Earth, indicating the presence of a subducting slab. Dip angles of these slabs average about 45°, with some shallower and others nearly vertical.

Benioff zones coincide with tectonically active island arcs, such as:

– Aleutian islands;

– Japan islands;

– Vanuatu islands; and

– Tonga.

Island arcs are normally ( but not always ) associated with ultra-deep sea ocean trenches, such as those along the South America ( Andes ) mountain system.

Exceptions to this rule include the:

– Romania ( East Europe ) mountain system; and,

– Hindu Kush mountain system.

In most Benioff zones, deep-earthquake and intermediate-earthquake foci are found within a narrow layer; however, recent, more precise hypocentral locations – in Japan and elsewhere – indicate two ( 2 ) distinct parallel bands of foci only 12 miles ( 20 kilometers ) apart.

Aftershocks, Swarms and Foreshocks

A major or even moderate earthquake of shallow focus is followed by many lesser-size earthquakes close to the original source region. This is to be expected if the fault rupture producing a major earthquake does not relieve all the accumulated strain energy at once. In fact, this dislocation is liable to cause an increase in the stress and strain at a number of places in the vicinity of the focal region, bringing crustal rocks at certain points close to the stress at which fracture occurs. In some cases an earthquake may be followed by 1,000 or more aftershocks a day.

Sometimes a large earthquake is followed by a similar one along the same fault source within an hour or perhaps a day. An extreme case of this is multiple earthquakes. In most instances, however, the first principal earthquake of a series is much more severe than the aftershocks. In general, the number of aftershocks per day decreases with time.

Aftershock frequency is ( roughly ) inversely proportional to the time since the occurrence of the largest earthquake in the series.
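This roughly hyperbolic decay is commonly modelled by Omori's law; a minimal sketch ( the constants K and c here are illustrative – in practice they are fitted to each aftershock sequence ):

```python
def omori_rate(t_days: float, k: float = 1000.0, c: float = 0.5) -> float:
    """Aftershocks per day at t days after the mainshock, using Omori's
    law n(t) = K / (c + t). K and c are illustrative values only."""
    return k / (c + t_days)

# The rate falls off roughly as 1/t: a day, ten days, a hundred days out.
for t in (1, 10, 100):
    print(t, round(omori_rate(t)))
```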

Most major earthquakes occur without detectable warning, but some principal earthquakes are preceded by foreshocks.

Japan 2-Years of Hundreds of Thousands Of Earthquakes

In another common pattern, large numbers of small earthquakes may occur in a region for months without a major earthquake.

In the Matsushiro region of Japan, for instance, there occurred ( between August 1965 and August 1967 ) a series of earthquakes numbering in the hundreds of thousands – some strong enough ( up to Richter magnitude 5.0 ) to cause property damage but no casualties.

At its maximum frequency, the series produced 6,780 small earthquakes on April 17, 1966 alone.

Such series of earthquakes are called earthquake swarms.

Earthquakes associated with volcanic activity often occur in swarms – though swarms also have been observed in many nonvolcanic regions.

Study of Earthquakes

Seismic waves

Principal types of seismic waves

Seismic waves generated by an earthquake source are commonly classified into three ( 3 ) leading types.

The first two ( 2 ) leading types propagate ( travel ) within the body of the Earth and are known as:

– P ( Primary ) Seismic Waves; and,

– S ( Secondary ) Seismic Waves.

The third ( 3rd ) leading type propagates ( travels ) along the surface of the Earth and comprises:

– L ( Love ) Seismic Waves; and,

– R ( Rayleigh ) Seismic Waves.

During the 19th Century, the existence of these types of seismic waves was mathematically predicted, and modern comparisons show close correspondence between such theoretical calculations and actual measurements of seismic waves.

P seismic waves travel as elastic motions at the highest speeds, and are longitudinal waves transmitted by both solid and liquid materials within the inner Earth.

With P waves, the particles of the medium vibrate in a manner similar to sound waves, the transmitting media being alternately compressed and expanded.

The slower type of body wave, the S wave, travels only through solid material. With S waves, the particle motion is transverse to the direction of travel and involves a shearing of the transmitting rock.

Focus ( Foci )

Because of their greater speed, P waves are the first ( 1st ) to reach any point on the Earth's surface. The first ( 1st ) P-wave onset starts from the spot where an earthquake originates. This point, usually at some depth within the Earth, is called the focus ( also known as the hypocentre ).

Epicenter

The point at the surface, immediately above the focus, is known as the ‘epicenter’.

Love waves and Rayleigh waves, guided by the free surface of the Earth, trail after the P and S waves have passed through the body of the planet.

Rayleigh waves ( R Waves ) and Love waves ( L Waves ) both involve horizontal particle motion, however only Rayleigh waves exhibit vertical ground displacements.

When traveling ( propagating ) away from alluvial basin sources, Rayleigh waves ( R waves ) and Love waves ( L waves ) disperse into long wave trains that, at substantial distances, cause much of the Earth surface ground shaking felt during earthquakes.

Seismic Wave Focus ( Foci ) Properties

At all distances from the focus, mechanical properties of the rocks, such as incompressibility, rigidity and density, play roles in the:

– Speed of ‘wave travel’;

– Duration of ‘wave trains’; and,

– Shape of ‘wave trains’.

Layering of the rocks and the physical properties of surface soil also affect wave characteristics.

In most cases, elastic behavior occurs in earthquakes; however, strong shaking of surface soils by the incident seismic waves sometimes results in nonelastic behavior, including slumping ( i.e., downward and outward movement of unconsolidated material ) and liquefaction of sandy soil.

A seismic wave that encounters a boundary separating rocks of different elastic properties undergoes reflection and refraction. A special complication exists because conversion between wave types usually also occurs at such a boundary: an incident P or S wave can yield reflected P and S waves and refracted P and S waves.
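Reflected, refracted and converted phases at such a boundary all obey Snell's law; a minimal sketch with illustrative wave speeds ( the velocity values below are assumptions, not from the text ):

```python
import math

def refraction_angle_deg(incidence_deg: float, v_in: float, v_out: float):
    """Snell's law at a plane boundary: sin(i)/v_in = sin(r)/v_out.
    Returns the refracted angle in degrees, or None if the incidence
    angle exceeds the critical angle (total reflection)."""
    s = math.sin(math.radians(incidence_deg)) * v_out / v_in
    if s > 1.0:
        return None  # beyond the critical angle
    return math.degrees(math.asin(s))

# A P wave at 30° incidence crossing from 6.0 km/s rock into 8.0 km/s rock
# bends away from the normal; a converted S wave (3.5 km/s) bends toward it.
print(round(refraction_angle_deg(30.0, 6.0, 8.0), 1))  # 41.8
print(round(refraction_angle_deg(30.0, 6.0, 3.5), 1))  # 17.0
```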

Boundaries between Earth structural layers also give rise to diffracted and scattered waves, and these additional waves are partially responsible for the complications observed in ground motion during earthquakes.

Modern research is concerned with computing synthetic records of ground motion that compare realistically with observed actual ground shaking, using wave theory in complex structures.

Grave Duration Long-Periods, Audible Earthquake Frequencies, and Other Earth Anomalies

The frequency range of seismic waves is widely varied, from High Frequency ( HF ) in the audible range ( i.e. greater than 20 hertz ) to Low Frequency ( LF ) as subtle as the free oscillations of planet Earth – the gravest mode having a Long-Period of 54-minutes ( see below Long-Period oscillations of the globe ).

Seismic wave attenuation in rock imposes High-Frequency ( HF ) limits, and in small to moderate earthquakes the dominant frequencies of surface waves extend from about 1.0 Hz to 0.1 Hertz.

Seismic wave amplitude range is also great in most earthquakes.

Displacement of the ground ranges from 10^-10 to 10^-1 metre ( 4 × 10^-9 to 4 inches ).

Great Earthquake Ground Acceleration

Great Earthquake Ground Acceleration Can Exceed 32-Feet Per Second Squared ( 9.8 Metres Per Second Squared ) –

In the greatest earthquakes, the ground amplitude of the predominant P waves may be several centimetres at periods of 2-seconds to 5-seconds. Very close to the seismic sources of great earthquakes, however, investigators have measured large wave amplitudes with accelerations of the ground exceeding the acceleration of gravity, 32.2 feet per second squared ( 9.8 metres per second squared ), at High Frequencies ( HF ) and ground displacements of 1 metre at Low Frequencies ( LF ).

Seismic Wave Measurement

Seismographs and Accelerometers

Seismographs ‘measure ground motion’ in both ‘earthquakes’ and ‘microseisms’ ( small oscillations described below ).

Most of these instruments are of the pendulum type. Early mechanical seismographs had a pendulum of large mass ( up to several tons ) and produced seismograms by scratching a line on smoked paper on a rotating drum.

In later instruments, seismograms were recorded via rays of light bounced off a mirror within a galvanometer, using electric current produced by electromagnetic induction when the pendulum of the seismograph moved.

Technological developments in electronics have given rise to ‘higher-precision pendulum seismometers’ and ‘sensors of ground motion’.

In these instruments, electric voltages produced by motions of the pendulum ( or its equivalent ) are passed through electronic circuitry that amplifies and digitizes the ground motion for more exact readings.

Seismometer Nomenclature Meanings

Seismographs are divided into three ( 3 ) types of instruments, often confused by the public because of their varied names:

– Short-Period;

– Long-Period ( also known as Intermediate-Period ); and,

– Ultra-Long-Period ( also known as Broadband or Broad-Band ).

Short-Period instruments are used to record P and S body waves with high magnification of the ground motion. For this purpose, the seismograph response is shaped to peak at a period of about 1-second or less.

Intermediate-period instruments, the type used by the World-Wide Standardized Seismographic Network ( WWSSN ) – described in the section Earthquake observatories – had a response that peaked at a period of about 20-seconds.

Recently, in order to provide as much flexibility as possible for research work, the trend has been toward the operation of very broadband seismographs with digital representation of signals. This is usually accomplished with very long-period pendulums and electronic amplifiers that pass signals in the band between 0.005 Hz and 50 Hertz.

When seismic waves close to their source are to be recorded, special design criteria are needed. Instrument sensitivity must ensure that the largest ground movements can be recorded without exceeding the upper scale limit of the device. For most seismological and engineering purposes the wave frequencies that must be recorded are higher than 1 hertz, and so the pendulum or its equivalent can be small. For this reason accelerometers that measure the rate at which the ground velocity is changing have an advantage for strong-motion recording. Integration is then performed to estimate ground velocity and displacement. The ground accelerations to be registered range up to two times that of gravity. Recording such accelerations can be accomplished mechanically with short torsion suspensions or force-balance mass-spring systems.
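The integration step mentioned above can be sketched as cumulative trapezoidal sums, converting a sampled acceleration record first to velocity and then to displacement ( a simplified illustration with a synthetic record, ignoring the baseline corrections real processing requires ):

```python
def integrate(samples, dt):
    """Cumulative trapezoidal integration of an evenly sampled record."""
    out, total = [0.0], 0.0
    for a, b in zip(samples, samples[1:]):
        total += 0.5 * (a + b) * dt
        out.append(total)
    return out

dt = 0.01                    # 100 samples per second
accel = [9.8] * 101          # constant 1 g held for 1 second (synthetic)
vel = integrate(accel, dt)   # ground velocity, m/s
disp = integrate(vel, dt)    # ground displacement, metres
print(round(vel[-1], 2))     # 9.8 m/s after 1 s
print(round(disp[-1], 2))    # 4.9 m, matching s = ½·a·t²
```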

Because many strong-motion instruments need to be placed at unattended sites in ordinary buildings for periods of months or years before a strong earthquake occurs, they usually record only when a trigger mechanism is actuated with the onset of ground motion. Solid-state memories are now used, particularly with digital recording instruments, making it possible to preserve the first few seconds before the trigger starts the permanent recording and to store digitized signals on magnetic cassette tape or on a memory chip. In past design absolute timing was not provided on strong-motion records but only accurate relative time marks; the present trend, however, is to provide Universal Time ( the local mean time of the prime meridian ) by means of special radio receivers, small crystal clocks, or GPS ( Global Positioning System ) receivers from satellite clocks.
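The pre-trigger memory described above can be sketched as a ring buffer that always holds the most recent few seconds of samples, so the onset of motion survives even though permanent recording starts only at the trigger. This is a simplified illustration; the sample rate, threshold and buffer length are arbitrary assumptions:

```python
from collections import deque

SAMPLE_RATE = 100        # samples per second (assumed)
PRETRIGGER_SECONDS = 5   # context preserved before the trigger
TRIGGER_THRESHOLD = 0.5  # trigger level, arbitrary units

def record_event(samples):
    """Continuously buffer samples; on trigger, return the preserved
    pre-trigger window plus everything after the trigger."""
    buffer = deque(maxlen=SAMPLE_RATE * PRETRIGGER_SECONDS)
    samples = iter(samples)
    for s in samples:
        buffer.append(s)
        if abs(s) >= TRIGGER_THRESHOLD:          # trigger actuated
            return list(buffer) + list(samples)  # pre-trigger + remainder
    return []                                    # no trigger: nothing kept

# Quiet background noise, then a strong onset:
trace = [0.01] * 1000 + [0.9, 0.8, 0.7]
event = record_event(trace)
print(len(event))  # 502: 500 buffered pre-trigger samples + 2 post-trigger
```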

Prediction of strong ground motion and response of engineered structures in earthquakes depends critically on measurements of the spatial variability of earthquake intensities near the seismic wave source. In an effort to secure such measurements, special arrays of strong-motion seismographs have been installed in areas of high seismicity around the world.

Large-aperture seismic arrays ( linear dimensions on the order of 1 kilometer to 10 kilometers ( 0.6 to 6 miles ) ) of strong-motion accelerometers are now used to improve estimations of the speed, direction of propagation and types of seismic wave components.

Particularly important for a full understanding of seismic wave patterns at the ground surface is measurement of the variation of wave motion with depth; to aid in this effort, special digitally recording seismometers have been installed in deep boreholes.

Ocean-Bottom Measurements

Because 70% of the Earth's surface is covered by water, ocean-bottom seismometers augment ( add to ) the global land-based system of recording stations.

Field tests have established the feasibility of extensive long-term recording by instruments on the seafloor.

Japan has had a semi-permanent seismograph system of this type since 1978, when instruments connected by cable were placed on the seafloor off the Pacific Ocean east coast of central Honshu, Japan.

Because of mechanical difficulties maintaining ‘permanent ocean-bottom instrumentation’, different systems have been considered.

They ‘all involve placement of instruments on the ocean bottom’, though they employ various mechanisms for data transmission.

Signals may be transmitted to the ocean surface for retransmission by auxiliary apparatus or transmitted via cable to a shore-based station. Another system is designed to release its recording device automatically, allowing it to float to the surface for later recovery.

Ocean bottom seismograph use should yield much-improved global coverage of seismic waves and provide new information on the seismicity of oceanic regions.

Ocean-bottom seismographs will enable investigators to determine the details of the crustal structure of the seafloor and, because of the relative ‘thinness of the oceanic crust‘, should make possible collection of clear seismic information about Earth’s upper mantle.

Ocean bottom seismograph systems are also expected to provide new data, on Earth:

– Continental Shelf Plate Boundaries;

– MicroSeisms ( origins and propagations ); and,

– Ocean-to-Continent margin behavior.

MicroSeisms Measurements

MicroSeisms, also known as ‘small ground motions’, are commonly recorded by seismographs. These small, weak seismic wave motions are not generated by earthquakes, but in some instances they can complicate accurate earthquake measurement recording. MicroSeisms are of scientific interest because their form relates to Earth surface structure.

Some microseisms have a local cause, for example: microseisms due to traffic ( or machinery ), local wind effects, storms, and rough surf against an extended steep coastline.

Another class of microseisms exhibits features that are very similar on records traced at earthquake observatories that are widely separated, including approximately simultaneous occurrence of maximum amplitudes and similar wave frequencies. These microseisms may persist for many hours and have more or less regular periods of about five to eight seconds.

The largest amplitudes of such microseisms are on the order of 10^-3 cm ( 0.0004 inch ) and occur in coastal regions. Amplitudes also depend to some extent on local geologic structure.

Some microseisms are produced when large standing water waves are formed far out at sea. The period of this type of microseism is half that of the standing wave.

Observations of Earthquakes

Earthquake Observatories

During the late 1950s, there were only about seven-hundred ( 700 ) seismographic stations worldwide, equipped with seismographs of various types and frequency responses – few of which were calibrated; as a result, actual ground motions could not be measured, and timing errors of several seconds were common.

The World-Wide Standardized Seismographic Network ( WWSSN ) became the first modern worldwide standardized system established to remedy that situation.

Each station of the WWSSN had six ( 6 ) seismographs – three ( 3 ) short-period and three ( 3 ) long-period – with timing accuracy maintained by quartz crystal clocks and a calibration pulse placed daily on each record.

By 1967, the WWSSN consisted of about one-hundred twenty ( 120 ) stations throughout sixty ( 60 ) countries, and its data provided the basis for significant advances in research, on:

– Earthquakes ( mechanisms );

– Plate Tectonics ( global ); and,

– Deep-Structure Earth ( interior ).

By the 1980s a further upgrading of permanent seismographic stations began with the installation of digital equipment by a number of organizations.

Global digital seismograph station networks, now in operation, consist of:

– Seismic Research Observatories ( SRO ), installed in boreholes drilled 330 feet ( 100 metres ) into the ground; and,

– Modified high-gain, long-period earthquake observatories located on the ground surface.

The Global Digital Seismographic Network in particular has remarkable capability, recording all motions from Earth ocean tides to microscopic ground motions at the level of local ground noise.

At present there are about 128 sites. With this system the long-term seismological goal will have been accomplished of equipping global observatories with seismographs able to record every small earthquake anywhere over a broad band of frequencies.

Epicentre Earthquakes Located

Many observatories make provisional estimates of the epicentres of important earthquakes. These estimates provide preliminary information locally about particular earthquakes and serve as first approximations for the calculations subsequently made by large coordinating centres.

If an earthquake’s epicentre is less than 105° away from an earthquake observatory, the epicentre position can often be estimated from the readings of three ( 3 ) seismograms recording perpendicular components of the ground motion.

For a shallow earthquake, the epicentral distance is indicated by the interval between the arrival times of the P and S waves; the azimuth and angle of wave emergence at the surface are indicated by comparing the sizes and directions of the first ( 1st ) movements shown on the seismograms and the relative sizes of later waves – particularly surface waves.
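The distance rule above can be sketched numerically. With typical crustal wave speeds ( the velocity values are assumptions, not from the text ), each second of S-minus-P delay corresponds to roughly 8 kilometers of epicentral distance:

```python
def epicentral_distance_km(s_minus_p_seconds: float,
                           vp: float = 6.0, vs: float = 3.5) -> float:
    """Distance from a station to a shallow earthquake, from the S-P
    arrival-time interval. With P speed vp and S speed vs (km/s, typical
    crustal values assumed), distance = dt * vp * vs / (vp - vs)."""
    return s_minus_p_seconds * vp * vs / (vp - vs)

# A 10-second S-P interval places the source roughly 84 km away:
print(round(epicentral_distance_km(10.0)))  # 84
```

Distances from three or more stations, combined with the azimuth estimates described above, then constrain the epicentre position.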

Anomaly

It should be noted, however, that in certain regions the first ( 1st ) wave movement at a station arrives from a direction differing from the azimuth toward the epicentre. This ‘anomaly is usually explained’ by ‘strong variations in geologic structures’.

When data from more than one observatory are available, an earthquake’s epicentre may be estimated from the times of travel of the P and S waves from source to recorder. In many seismically active regions, networks of seismographs with telemetry transmission and centralized timing and recording are common. Whether analog or digital recording is used, such integrated systems greatly simplify observatory work: multichannel signal displays make identification and timing of phase onsets easier and more reliable. Moreover, online microprocessors can be programmed to pick automatically, with some degree of confidence, the onset of a significant common phase, such as P, by correlation of waveforms from parallel network channels. With the aid of specially designed computer programs, seismologists can then locate distant earthquakes to within about 10 km (6 miles) and the epicentre of a local earthquake to within a few kilometres.

Catalogs of earthquakes felt by humans and of earthquake observations have appeared intermittently for many centuries. The earliest known list of instrumentally recorded earthquakes with computed times of origin and epicentres is for the period 1899 – 1903, after which cataloging of earthquakes became more uniform and complete.

Especially valuable is the service provided by the International Seismological Centre ( ISC ) in Newbury, UK, which each month receives more than 1,000,000 seismic readings from more than 2,000 seismic monitoring stations worldwide, along with preliminary location estimates for approximately 1,600 earthquakes from national and regional agencies and observatories.

The ISC publishes a monthly bulletin, approximately two ( 2 ) years in arrears. The bulletin, when published, provides ‘all available information’ on each of more than 5,000 earthquakes.

Various national and regional centres control networks of stations and act as intermediaries between individual stations and the international organizations.

Examples of long-standing national centers include, the:

– Japan Meteorological Agency; and,

– U.S. National Earthquake Information Center ( NEIC ), a subdivision of the U.S. Geological Survey ( USGS ).

Centers such as the aforementioned normally make ‘local earthquake estimates’ of:

– Magnitude;

– Epicentre;

– Origin time; and,

– Focal depth.

Global seismicity data is continually accessible via the Incorporated Research Institutions for Seismology ( IRIS ) website.

An important research technique infers the character of faulting ( in an earthquake ) from recorded seismograms.

For example, observed distributions of the directions of the first onsets in waves arriving at the Earth’s surface have been effectively used.

Onsets are called “compressional” or “dilatational,” according to whether the direction is ‘away from’ or ‘toward’ the focus, respectively.

A polarity pattern becomes recognizable when the directions of the P-wave onsets are plotted on a map – there are broad areas in which the first onsets are predominantly compressions, separated from predominantly dilatational areas by nodal curves near which the P-wave amplitudes are abnormally small.

In 1926 the American geophysicist Perry E. Byerly used patterns of P onsets over the entire globe to infer the orientation of the fault plane in a large earthquake. The polarity method yields two P-nodal curves at the Earth’s surface; one curve is in the plane containing the assumed fault, and the other is in the plane ( called the auxiliary plane ) that passes through the focus and is perpendicular to the direction of fault slip.

The recent availability of worldwide broadband digital recording has enabled computer programs to be written that estimate the fault mechanism and seismic moment from the complete pattern of seismic wave arrivals.

Given a well-determined pattern at a number of earthquake observatories, it is possible to locate two ( 2 ) planes, one ( 1 ) of which is the plane containing the fault.

Earthquake Prediction

Earthquake Observations & Interpretations

Statistical periodicities in earthquake occurrence have been theorized but are not widely accepted, and no periodic cycles in time or space have been reliably detected. Records of major / great earthquakes are cataloged as far back as 700 B.C., with China holding the ‘world’s most extensive catalog’ of approximately one thousand ( 1,000 ) destructive earthquakes, in which ‘magnitude ( size )’ was assessed from ‘damage reports’, experienced periods of ‘shaking’ and ‘other observations’ determining the ‘intensity’ of those earthquakes.

Earthquake Attributions to Postulation

Precursor predictability approaches involve what some believe is sheer postulation of the initial trigger mechanisms that force Earth ruptures; where this becomes bizarre, however, is where such forces have been attributed, to:

– Weather Severity;

– Volcano Activity; and,

– Ocean Tide Force ( Moon ).

EXAMPLE: Correlations between physical phenomena assumed to provide trigger mechanisms for earthquake repetition.

Professionals believe such correlations must always be examined to discover whether a causative link is actually present, and they further believe that, to date, no proposed trigger mechanism for ‘moderate earthquakes’ to ‘large earthquakes’ has unequivocally satisfied the various necessary criteria.

Statistical methods have also been tried with populations of regional earthquakes. It has been suggested, but never generally established, that the slope b of the regression line between the logarithm of the number of earthquakes and the magnitude for a region may change characteristically with time.

Specifically, the claim is that the b value for the population of ‘foreshocks of a major earthquake’ may be ‘significantly smaller’ than the mean b value for the region averaged ‘over a long interval of time’.

Elastic rebound theory of earthquake sources allows rough prediction of the occurrence of large shallow earthquakes – for example – Harry F. Reid gave a crude forecast of the next great earthquake near San Francisco ( the theory also predicted, of course, that the place would be along the San Andreas Fault or an associated fault ). Geodetic data indicated that during an interval of 50 years relative displacements of 3.2 metres ( 10-1/2 feet ) had occurred at distant points across the fault. The maximum elastic-rebound offset along the fault in the 1906 earthquake was 6.5 metres. Therefore, ( 6.5 ÷ 3.2 ) × 50, or about 100 years, would again elapse before sufficient strain accumulated for the occurrence of an earthquake comparable to that of 1906; the premises being that regional strain grows uniformly and that various constraints were not altered by the great 1906 rupture itself ( such as by the onset of slow fault slip ).
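Reid’s recurrence arithmetic, using the figures quoted above, can be written out as:

```python
# Elastic-rebound recurrence estimate (Reid's reasoning for San Francisco):
# if geodetic surveys show `accumulated_offset_m` of relative displacement
# across the fault over `interval_yr`, and the last great earthquake released
# `rupture_offset_m`, the time to re-accumulate that much strain scales
# proportionally.

def recurrence_years(rupture_offset_m, accumulated_offset_m, interval_yr):
    return (rupture_offset_m / accumulated_offset_m) * interval_yr

t = recurrence_years(6.5, 3.2, 50)   # the "about 100 years" figure in the text
```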

Such ‘strain rates’ are now, however, being more adequately measured ( along a number of active faults, e.g. the San Andreas Fault ) using networks of GPS sensors.
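Estimating such a strain ( slip ) rate from a GPS position time series is, in its simplest form, a least-squares slope fit; the 35 mm/yr rate in this synthetic example is illustrative of San Andreas-scale motion, not a value from the text:

```python
# Least-squares slope of fault-parallel displacement versus time gives a
# slip-rate estimate from yearly GPS positions.

def slip_rate(years, displacements_m):
    n = len(years)
    mean_t = sum(years) / n
    mean_d = sum(displacements_m) / n
    num = sum((t - mean_t) * (d - mean_d) for t, d in zip(years, displacements_m))
    den = sum((t - mean_t) ** 2 for t in years)
    return num / den   # metres per year

years = list(range(2000, 2011))
disp = [0.035 * (y - 2000) for y in years]   # ideal 35 mm/yr relative motion
rate = slip_rate(years, disp)
```

Real series also need noise, seasonal terms, and offsets removed, but the slope is the quantity that feeds recurrence estimates like Reid’s.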

Earthquake Prediction Research

For many years prediction research has been influenced by the basic argument that ‘strain accumulates in rock masses in the vicinity of a fault, resulting in crustal deformation’.

Deformations have been measured in ‘horizontal directions’ along active faults via ‘trilateration’ and ‘triangulation’, and in ‘vertical directions’ via ‘precise leveling’ and ‘tiltmeters’.

Some investigators believe ‘ground-water level changes occur prior to earthquakes’, with reports of such variations coming from China.

Ground water levels respond to an array of complex factors ( e.g. ‘rainfall’ ), whose effects would have to be removed if changes in water level were to be studied in relation to earthquakes.

Phenomena Precursor Premonitories

Dilatancy theory ( i.e., volume increase of rock prior to rupture ) once occupied a central position in discussions of premonitory phenomena of earthquakes, but it now receives less support, based on observations that many solids exhibit dilatancy during deformation. For earthquake prediction, the significance of dilatancy, if real, is that it affects various measurable quantities of the Earth’s crust, i.e. seismic velocity, electrical resistivity, and ground and water levels. The consequences of dilatancy for earthquake prediction are summarized in the table ( below ):

The best-studied consequence is the effect on seismic velocities. The influence of internal cracks and pores on the elastic properties of rocks can be clearly demonstrated in laboratory measurements of those properties as a function of hydrostatic pressure. In the case of saturated rocks, experiments predict – for shallow earthquakes – that dilatancy occurs as a portion of the crust is stressed to failure, causing a decrease in the velocities of seismic waves. Recovery of velocity is brought about by subsequent rise of the pore pressure of water, which also has the effect of weakening the rock and enhancing fault slip.
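One practical way to monitor the seismic-velocity consequence is the vp/vs ratio, which a Wadati diagram extracts without knowing the origin time: the S-minus-P interval grows linearly with P arrival time, with slope vp/vs − 1. A sketch with synthetic arrivals ( the 1.73 ratio, 6 km/s velocity, and station distances are assumed for illustration ):

```python
# Wadati-diagram estimate of vp/vs from P and S arrival times at several
# stations: (ts - tp) = (vp/vs - 1) * (tp - t0), so the least-squares slope
# of S-P time versus P time gives vp/vs - 1 regardless of origin time t0.

def vp_over_vs(tp_list, ts_list):
    sp = [ts - tp for tp, ts in zip(tp_list, ts_list)]
    n = len(tp_list)
    mt = sum(tp_list) / n
    ms = sum(sp) / n
    slope = (sum((t - mt) * (s - ms) for t, s in zip(tp_list, sp))
             / sum((t - mt) ** 2 for t in tp_list))
    return 1.0 + slope

# Synthetic arrivals for vp/vs = 1.73, origin time t0 = 5 s, vp = 6 km/s:
distances_km = (30, 60, 90, 120)
tp = [5 + d / 6.0 for d in distances_km]
ts = [5 + 1.73 * d / 6.0 for d in distances_km]
ratio = vp_over_vs(tp, ts)
```

A sustained drop in this ratio over a source region is the kind of dilatancy signal the table refers to.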

Strain buildup in the focal region may have measurable effects on other observable properties, including electrical conductivity and gas concentration. Because the electrical conductivity of rocks depends largely on interconnected water channels within the rocks, resistivity may increase before the cracks become saturated. As pore fluid is expelled from the closing cracks, the local water table would rise and concentrations of gases such as radioactive radon would increase. No unequivocal confirming measurements have yet been published.

Geologic methods of extending the seismicity record back from the present are also being explored. Field studies indicate that the sequence of surface ruptures along major active faults associated with large earthquakes can sometimes be reconstructed.

An example is the series of large earthquakes in Turkey in the 20th century, which were caused mainly by successive westward ruptures of the North Anatolian Fault.

Liquefaction effects preserved in beds of sand and peat have provided evidence ( using radiometric dating methods ) for large paleoearthquakes going back more than 1,000 years in many seismically active zones, including the Pacific coastal region of the U.S. Northwest.

Less well-grounded precursory phenomena, particularly earthquake lights and animal behaviour, sometimes draw more public attention than the precursors discussed above.

Unusual lights in the sky and abnormal animal behaviour reported preceding earthquakes are known to seismologists – mostly in anecdotal form.

Both phenomena are usually explained away in terms of there being ( prior to earthquakes ):

– Gaseous emissions from the Earth’s ground;

– Electric stimuli ( various ), e.g. HAARP, etcetera, from the Earth’s ground; and,

– Acoustic stimuli ( various ), e.g. subsonic Seismic Wave emissions from the Earth’s ground.

At the present time, there is no definitive experimental evidence supporting reported claims of animals sometimes sensing an approaching earthquake.

… [ CENSORED-OUT ] …

Earthquake Hazard Reduction Methods

Considerable work has been done in seismology to explain the characteristics of the recorded ground motions in earthquakes. Such knowledge is needed to predict ground motions in future earthquakes so that earthquake-resistant structures can be designed.

Although earthquakes cause death and destruction via secondary effects ( i.e. landslides, tsunamis, fires and fault rupture ), the greatest losses ( of human lives and property ) result from ‘collapsing man-made structures’ amidst violent ground shaking.

The most effective way to mitigate ( minimize ) damage from earthquakes – from an engineering standpoint – is to design and construct structures capable of withstanding ‘strong ground motions’.

Interpreting recorded ground motions

Most ‘elastic waves’ recorded close to an extended fault source are complicated and difficult to interpret uniquely.

Understanding such near-source motion can be viewed as a three ( 3 ) part problem.

The first ( 1st ) part stems from the generation of elastic waves radiating from the slipping fault as the ‘moving rupture sweeps-out an area of slip’ ( along the fault plane ) within a given time.

The wave pattern produced depends on several parameters, such as fault dimension and rupture velocity.

Elastic waves ( various types ) radiate, from the vicinity of the moving rupture, in all directions.

Geometric and frictional properties of the fault critically affect the wave pattern radiated from it.

The second ( 2nd ) part of the problem concerns the passage of the waves through the intervening rocks to the site and the effect of geologic conditions.

The third ( 3rd ) part involves the conditions at the recording site itself, such as topography and highly attenuating soils. All these questions must be considered when estimating likely earthquake effects at a site of any proposed structure.

Experience has shown that strong ground-motion recordings have a variable pattern in detail but predictable regular shapes in general ( except in the case of strong multiple earthquakes ).

EXAMPLE: Actual ground shaking ( acceleration, velocity and displacement ) recorded during an earthquake ( see figure below ).

In a strong horizontal shaking of the ground near the fault source, there is an initial segment of motion made up mainly of P waves, which frequently manifest themselves strongly in the vertical motion. This is followed by the onset of S waves, often associated with a longer-period pulse of ground velocity and displacement related to the near-site fault slip or fling. This pulse is often enhanced in the direction of the fault rupture and normal to it. After the S onset there is shaking that consists of a mixture of S and P waves, but the S motions become dominant as the duration increases. Later, in the horizontal component, surface waves dominate, mixed with some S body waves. Depending on the distance of the site from the fault and the structure of the intervening rocks and soils, surface waves are spread out into long trains.
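The acceleration, velocity, and displacement traces described above are related by successive integration; a minimal sketch using trapezoidal integration on a synthetic record ( a cosine stand-in, not real strong-motion data ):

```python
import math

def integrate(samples, dt):
    """Cumulative trapezoidal integration of an evenly sampled record."""
    out = [0.0]
    for i in range(1, len(samples)):
        out.append(out[-1] + 0.5 * (samples[i - 1] + samples[i]) * dt)
    return out

dt = 0.01                                     # 100 samples per second
t = [i * dt for i in range(1001)]             # a 10-second record
accel = [math.cos(x) for x in t]              # synthetic acceleration trace
vel = integrate(accel, dt)                    # approximates sin(t)
disp = integrate(vel, dt)                     # approximates 1 - cos(t)
```

Processing real accelerograms also requires baseline correction and filtering, but the double integration is the core of how the velocity and displacement panels of such figures are produced.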

Expectant Seismic Hazard Maps Constructed

In many regions, seismic expectancy maps or hazard maps are now available for planning purposes. The anticipated intensity of ground shaking is represented by a number called the peak acceleration or the peak velocity.

To avoid weaknesses found in earlier earthquake hazard maps, the following general principles are usually adopted today:

– The map should take into account not only the size but also the frequency of earthquakes.

– The broad regionalization pattern should use historical seismicity as a database, including the following factors: major tectonic trends, acceleration attenuation curves, and intensity reports.

– Regionalization should be defined by means of contour lines with design parameters referred to ordered numbers on neighbouring contour lines ( this procedure minimizes sensitivity concerning the exact location of boundary lines between separate zones ).

– The map should be simple and not attempt to microzone the region.

– The mapped contoured surface should not contain discontinuities, so that the level of hazard progresses gradually and in order across any profile drawn on the map.
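The peak-acceleration values contoured on such maps are produced by combining earthquake frequency with acceleration attenuation curves; the functional form and coefficients below are hypothetical placeholders, intended only to show the shape of such a calculation:

```python
import math

# Hypothetical ground-motion attenuation relation (coefficients are
# illustrative, not from any published model): ln(PGA) grows with
# magnitude and decays with distance from the fault.
def peak_accel_g(magnitude, distance_km, a=-1.0, b=0.5, c=1.3, d=10.0):
    return math.exp(a + b * magnitude - c * math.log(distance_km + d))

near = peak_accel_g(7.0, 5.0)     # strong shaking close to the fault
far = peak_accel_g(7.0, 100.0)    # much weaker shaking at distance
```

Published attenuation relations are fitted to strong-motion datasets and include site and uncertainty terms; the point here is only that hazard-map contours come from evaluating such curves over a grid.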

Developing resistant structures

Developing engineered structural designs that are able to resist the forces generated by seismic waves can be achieved either by following building codes based on hazard maps or by appropriate methods of analysis. Many countries reserve theoretical structural analyses for the larger, more costly, or critical buildings to be constructed in the most seismically active regions, while simply requiring that ordinary structures conform to local building codes. Economic realities usually determine the goal, not of preventing all damage in all earthquakes but of minimizing damage in moderate, more common earthquakes and ensuring no major collapse at the strongest intensities. An essential part of what goes into engineering decisions on design and into the development and revision of earthquake-resistant design codes is therefore seismological, involving measurement of strong seismic waves, field studies of intensity and damage, and the probability of earthquake occurrence.

Earthquake risk can also be reduced by rapid post-earthquake response. Strong-motion accelerographs have been connected in some urban areas, such as Los Angeles, Tokyo, and Mexico City, to interactive computers.

Recorded waves are correlated with seismic intensity scales and rapidly displayed graphically on regional maps via the World Wide Web.

Exploration of the Earth’s interior with seismic waves

Seismological Tomography

Deep Structure Earth seismological data comes from several sources, including:

– Nuclear explosions generating P-Waves and S-Waves;

– Earthquakes generating P-Waves and S-Waves;

– Earth ‘surface wave dispersions’ from ‘distant earthquakes’; and,

– Earth ‘planetary vibration’ from ‘Great Earthquakes’.

One of the major aims of seismology has been to infer a minimum set of properties of the Earth’s interior that might explain recorded seismic ‘wave trains’ in detail.

Deep Structure Earth exploration made ‘tremendous progress during the first half of the 20th Century’ ( 1900s – 1950s ), yet realizing its goals was severely limited until the 1960s because of the laborious effort required just to evaluate theoretical models and to process large amounts of recorded earthquake data.

Today’s supercomputers, applying high-speed processing to enormous quantities of stored data together with information retrieval capabilities, opened information technology ( IT ) passageways leading to major advancements in the way data is manipulated ( data handling ) for advanced theoretical modeling, research analytics and developmental prototyping.

Earth structure realistic modeling studies by researchers since the middle 1970s include continental and oceanic boundaries, mountains and river valleys, rather than simple structures such as those involving variation only with depth; various technical developments have also benefited observational seismology.

EXAMPLE: Significant Deep Structure Earth exploration using 3D ( three-dimensional ) imaging with equally impressive display ( monitor ) equipment – made possible by advanced microprocessor architecture redesign, new discoveries of materials and new concepts – has seen seismic exploratory techniques developed by the petroleum industry ( e.g. seismic reflection ) become highly recognized and widely adopted procedures.

The major Deep Structure Earth method for determining the planet interior is detailed analysis of seismograms of seismic waves; earthquake readings additionally provide estimates of the Earth’s internal:

– Wave velocities;

– Density; and,

– Parameters of ‘elasticity’ and ‘inelasticity’ ( attenuation ).

Earthquake Travel Time

The primary procedure is to measure the travel times of various wave types, such as P and S, from their source to the recording seismograph. First, however, each wave type must be identified with its ray path through the Earth.

Seismic rays for many paths of P and S waves leaving the earthquake focus F are shown in the figure.

Deep-Focus Deep-Structure Earth Coremetrics

Rays corresponding to waves that have been reflected at the Earth’s outer surface (or possibly at one of the interior discontinuity surfaces) are denoted as PP, PS, SP, PSS, and so on. For example, PS corresponds to a wave that is of P type before surface reflection and of S type afterward. In addition, there are rays such as pPP, sPP, and sPS, the symbols p and s corresponding to an initial ascent to the outer surface as P or S waves, respectively, from a deep focus.

An especially important class of rays is associated with a discontinuity surface separating the central core of the Earth from the mantle at a depth of about 2,900 km (1,800 miles) below the outer surface. The symbol c is used to indicate an upward reflection at this discontinuity. Thus, if a P wave travels down from a focus to the discontinuity surface in question, the upward reflection into an S wave is recorded at an observing station as the ray PcS and similarly with PcP, ScS, and ScP. The symbol K is used to denote the part (of P type) of the path of a wave that passes through the liquid central core. Thus, the ray SKS corresponds to a wave that starts as an S wave, is refracted into the central core as a P wave, and is refracted back into the mantle, wherein it finally emerges as an S wave. Such rays as SKKS correspond to waves that have suffered an internal reflection at the boundary of the central core.

The discovery of the existence of an inner core in 1936 by the Danish seismologist Inge Lehmann made it necessary to introduce additional basic symbols. For paths of waves inside the central core, the symbols i and I are used analogously to c and K for the whole Earth; therefore, i indicates reflection upward at the boundary between the outer and inner portions of the central core, and I corresponds to the part (of P type) of the path of a wave that lies inside the inner portion. Thus, for instance, discrimination needs to be made between the rays PKP, PKiKP, and PKIKP. The first of these corresponds to a wave that has entered the outer part of the central core but has not reached the inner core, the second to one that has been reflected upward at the inner core boundary, and the third to one that has penetrated into the inner portion.

By combining the symbols p, s, P, S, c, K, i, and I in various ways, notation is developed for all the main rays associated with body earthquake waves.
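That combinatorial notation can be checked mechanically; the regular expression below is an illustrative approximation of the naming rules ( it validates the symbol alphabet and the lowercase depth-phase prefix, not the physical legality of every sequence ):

```python
import re

# Legs of a ray path: P/S legs in the mantle, c = reflection at the
# core-mantle boundary, K = P-type leg in the outer core, i = reflection at
# the inner-core boundary, I = P-type leg in the inner core. An optional
# lowercase p or s prefix marks the initial ascent from a deep focus.
RAY_CODE = re.compile(r"^[ps]?[PScKiI]+$")

def is_plausible_ray(code):
    """True if the code uses only valid symbols and contains a P or S leg."""
    return bool(RAY_CODE.match(code)) and any(ch in "PS" for ch in code)

examples = ["PcS", "SKS", "SKKS", "PKIKP", "pPP", "sPS", "PKiKP"]
```

A full validator would also enforce ordering constraints ( e.g. that i and I only appear inside a K…K segment ), which this sketch deliberately omits.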

Hidden Inner Earth Deep Structure Anomalies

The symbol J corresponds to S waves traveling within the Earth’s inner core; it will be used only if evidence for such waves is ever found. The use of times of travel along rays to infer hidden structure is analogous to the use of X-rays in medical tomography. The method involves reconstructing an image of internal anomalies from measurements made at the outer surface. Nowadays, hundreds of thousands of travel times of P and S waves are available in earthquake catalogs for the tomographic imaging of the Earth’s interior and the mapping of internal structure.

Inner Earth Deep Structure

Thinnest & Thickest Parts of Earth’s Crust

The Inner Earth, based on earthquake records and imaging studies, is officially represented, as:

A solid yet slowly flowing mantle layer, about 1,800 miles ( 2,900 kilometers ) thick at its thickest point, whose upper boundary lies less than 6 miles ( 10 kilometers ) beneath the seafloor of the ultra-deep ocean.

The thin surface rock layer surrounding the mantle is the crust, whose lower boundary is called the Mohorovičić discontinuity. In normal continental regions the crust is about 30 to 40 km thick; there is usually a superficial low-velocity sedimentary layer underlain by a zone in which seismic velocity increases with depth. Beneath this zone there is a layer in which P-wave velocities in some places fall from 6 to 5.6 km per second. The middle part of the crust is characterized by a heterogeneous zone with P velocities of nearly 6 to 6.3 km per second. The lowest layer of the crust ( about 10 km thick ) has significantly higher P velocities, ranging up to nearly 7 km per second.
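The vertical P-wave travel time through such a layered crust is simply the sum of thickness over velocity per layer; the sketch below uses illustrative layer values chosen within the ranges quoted above, not a published crustal model:

```python
# Vertical one-way P-wave travel time through a stack of crustal layers.
# Each layer is (thickness_km, p_velocity_km_s); the values are assumptions
# consistent with the velocity ranges described in the text.
def vertical_travel_time_s(layers):
    return sum(thickness / velocity for thickness, velocity in layers)

crust = [
    (2.0, 4.0),    # superficial low-velocity sedimentary layer
    (13.0, 6.0),   # upper crystalline crust
    (15.0, 6.2),   # heterogeneous middle crust
    (10.0, 7.0),   # high-velocity lowest crustal layer
]
t = vertical_travel_time_s(crust)   # seconds, Moho to surface
```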

In the deep ocean there is a sedimentary layer that is about 1 km thick. Underneath is the lower layer of the oceanic crust, which is about 4 km thick. This layer is inferred to consist of basalt that formed where extrusions of basaltic magma at oceanic ridges have been added to the upper part of lithospheric plates as they spread away from the ridge crests. This crustal layer cools as it moves away from the ridge crest, and its seismic velocities increase correspondingly.

Beneath the mantle, which at its thickest point reaches a depth of about 1,800 miles ( 2,900 km ), lies a shell about 2,255 km thick which, seismic waves indicate, has the properties of a liquid.

At the very centre of the planet is a separate solid core with a radius of 1,216 km. Recent work with observed seismic waves has revealed three-dimensional structural details inside the Earth, especially in the crust and lithosphere, under the subduction zones, at the base of the mantle, and in the inner core. These regional variations are important in explaining the dynamic history of the planet.

Long-Period Global Oscillations

Sometimes earthquakes can be so great that the entire planet Earth vibrates like a ringing bell. The deepest tone of vibration ever recorded has a period – the length of time between the arrival of successive crests in a wave train – of 54 minutes, a tone considered ‘grave’ ( extremely low ) in musical terms.

Knowledge of these vibrations has come from a remarkable extension in the ‘range of periods of ground movements’ now able to be recorded by modern ‘digital long-period seismographs’, spanning the entire spectrum of earthquake wave periods: from ordinary P waves ( with periods of tenths of seconds ) to vibrations with periods on the order of 12 and 24 hours, i.e. those movements occurring within the Earth’s ocean tides.

The measurements of vibrations of the whole Earth provide important information on the properties of the interior of the planet. It should be emphasized that these free vibrations are set up by the energy release of the earthquake source but continue for many hours and sometimes even days. For an elastic sphere such as the Earth, two types of vibrations are known to be possible. In one type, called S modes, or spheroidal vibrations, the motions of the elements of the sphere have components along the radius as well as along the tangent. In the second ( 2nd ) type, which are designated as T modes or torsional vibrations, there is shear but no radial displacements. The nomenclature is nSl and nTl, where the letters n and l are related to the surfaces in the vibration at which there is zero motion. Four ( 4 ) examples are illustrated in the figure. The subscript n gives a count of the number of internal zero-motion ( nodal ) surfaces, and l indicates the number of surface nodal lines.

Several hundred types of S and T vibrations have been identified and the associated periods measured. The amplitudes of the ground motion in the vibrations have been determined for particular earthquakes, and, more important, the attenuation of each component vibration has been measured. The dimensionless measure of this decay constant is called the quality factor Q. The greater the value of Q, the less the wave or vibration damping. Typically, for oS10 and oT10, the Q values are about 250.
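The damping measured by Q follows A( t ) = A0 · exp( −π f t / Q ): the larger the Q, the less the decay. A sketch using the Q ≈ 250 quoted above ( the 10-minute mode period is an illustrative round number, not a measured value ):

```python
import math

def amplitude_after(a0, frequency_hz, t_s, q):
    """Amplitude of a decaying free oscillation after elapsed time t_s,
    using the standard decay law A(t) = A0 * exp(-pi * f * t / Q)."""
    return a0 * math.exp(-math.pi * frequency_hz * t_s / q)

f = 1.0 / 600.0                                  # illustrative 10-minute mode
after_1h = amplitude_after(1.0, f, 3600.0, 250.0)   # Q = 250, one hour later
```

Because the exponent scales as 1/Q, doubling Q halves the decay rate, which is why high-Q modes remain observable for days after a great earthquake.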

The rate of decay of the vibrations of the whole Earth with the passage of time can be seen in the figure, where they appear superimposed for 20 hours of the 12-hour tidal deformations of the Earth. At the bottom of the figure these vibrations have been split up into a series of peaks, each with a definite frequency, similar to that of the spectrum of light.

Such a spectrum indicates the relative amplitude of each harmonic present in the free oscillations. If the physical properties of the Earth’s interior were known, all these individual peaks could be calculated directly. Instead, the internal structure must be estimated from the observed peaks.

Recent research has shown that observations of long-period oscillations of the Earth discriminate fairly finely between different Earth models. In applying the observations to improve the resolution and precision of such representations of the planet’s internal structure, a considerable number of Earth models are set up, and all the periods of their free oscillations are computed and checked against the observations. Models can then be successively eliminated until only a small range remains. In practice, the work starts with existing models; efforts are made to amend them by sequential steps until full compatibility with the observations is achieved, within the uncertainties of the observations. Even so, the resulting computed Earth structure is not a unique solution to the problem.

Extraterrestrial Seismic Phenomena

Space vehicles have carried seismic recording equipment to the surfaces of our Moon and Mars, and seismologists on Earth receive telemetry signals of seismic events from both.

By 1969, seismographs had been placed at six sites on the Moon during the U.S. Apollo missions. Recording of seismic data ceased in September 1977. The instruments detected between 600 and 3,000 moonquakes during each year of their operation, though most of these seismic events were very small. The ground noise on the lunar surface is low compared with that of the Earth, so that the seismographs could be operated at very high magnifications. Because there was more than one station on the Moon, it was possible to use the arrival times of P and S waves at the lunar stations from the moonquakes to determine foci in the same way as is done on the Earth.

Moonquakes are of three types. First, there are the events caused by the impact of lunar modules, booster rockets, and meteorites. The lunar seismograph stations were able to detect meteorites hitting the Moon’s surface more than 1,000 km (600 miles) away. The two other types of moonquakes had natural sources in the Moon’s interior: they presumably resulted from rock fracturing, as on Earth. The most common type of natural moonquake had deep foci, at depths of 600 to 1,000 km; the less common variety had shallow focal depths.

Seismological research on Mars has been less successful. Only one of the seismometers carried to the Martian surface by the U.S. Viking landers during the mid-1970s remained operational, and only one potential marsquake was detected in 546 Martian days.

Historical Major Earthquakes

Major historical earthquakes are listed chronologically in the table ( below ).

 

Major Earthquake History ( * Mag. = magnitude; death numbers approximate )

Year | Region / Area Affected | * Mag. | Intensity | Human Deaths | Remarks
c. 1500 BCE | Knossos, Crete ( Greece ) | – | X | – | One of several events that leveled the capital of Minoan civilization, this quake accompanied the explosion of the nearby volcanic island of Thera.
27 BCE | Thebes ( Egypt ) | – | – | – | This quake cracked one of the statues known as the Colossi of Memnon, and for almost two centuries the “singing Memnon” emitted musical tones on certain mornings as it was warmed by the Sun’s rays.
62 CE | Pompeii and Herculaneum ( Italy ) | – | X | – | These two prosperous Roman cities had not yet recovered from the quake of 62 when they were buried by the eruption of Mount Vesuvius in 79.
115 | Antioch ( Antakya, Turkey ) | – | XI | – | A centre of Hellenistic and early Christian culture, Antioch suffered many devastating quakes; this one almost killed the visiting Roman emperor Trajan.
1556 | Shaanxi ( province ), China | – | IX | 830,000 | Possibly the deadliest earthquake ever recorded.
1650 | Cuzco ( Peru ) | 8.1 | VIII | – | Many of Cuzco’s Baroque monuments date to the rebuilding of the city after this quake.
1692 | Port Royal ( Jamaica ) | – | – | 2,000 | Much of this British West Indies port, a notorious haven for buccaneers and slave traders, sank beneath the sea following the quake.
1693 | southeastern Sicily ( Italy ) | – | XI | 93,000 | Syracuse, Catania, and Ragusa were almost completely destroyed but were rebuilt with a Baroque splendour that still attracts tourists.
1755 | Lisbon, Portugal | – | XI | 62,000 | The Lisbon earthquake of 1755 was felt as far away as Algiers and caused a tsunami that reached the Caribbean.
1780 | Tabriz ( Iran ) | 7.7 | – | 200,000 | This ancient highland city was destroyed and rebuilt, as it had been in 791, 858, 1041, and 1721 and would be again in 1927.
1811 – 1812 | New Madrid, Missouri ( USA ) | 7.5 – 7.7 | XII | – | A series of quakes at the New Madrid Fault caused few deaths, but the New Madrid earthquake of 1811 – 1812 rerouted portions of the Mississippi River and was felt from Canada to the Gulf of Mexico.
1812 | Caracas ( Venezuela ) | 9.6 | X | 26,000 | A provincial town in 1812, Caracas recovered and eventually became Venezuela’s capital.
1835 | Concepción ( Chile ) | 8.5 | – | 35 | British naturalist Charles Darwin, witnessing this quake, marveled at the power of the Earth to destroy cities and alter landscapes.
1886 | Charleston, South Carolina ( USA ) | – | IX | 60 | This was one of the largest quakes ever to hit the eastern United States.
1895 | Ljubljana ( Slovenia ) | 6.1 | VIII | – | Modern Ljubljana is said to have been born in the rebuilding after this quake.
1906 | San Francisco, California ( USA ) | 7.9 | XI | 700 | San Francisco still dates its modern development from the San Francisco earthquake of 1906 and the resulting fires.
1908 | Messina and Reggio di Calabria, Italy | 7.5 | XII | 110,000 | These two cities on the Strait of Messina were almost completely destroyed in what is said to be Europe’s worst earthquake ever.
1920 | Gansu ( province ), China | 8.5 | – | 200,000 | Many of the deaths in this quake-prone province were caused by huge landslides.
1923 | Tokyo-Yokohama ( Japan ) | 7.9 | – | 142,800 | Japan’s capital and its principal port, located on soft alluvial ground, suffered severely from the Tokyo-Yokohama earthquake of 1923.
1931 | Hawke Bay, New Zealand | 7.9 | – | 256 | The bayside towns of Napier and Hastings were rebuilt in an Art Deco style that is now a great tourist attraction.
1935 | Quetta ( Pakistan ) | 7.5 | X | 20,000 | The capital of Balochistan province was severely damaged in the most destructive quake to hit South Asia in the 20th century.
1948 | Ashgabat ( Turkmenistan ) | 7.3 | X | 176,000 | Every year, Turkmenistan commemorates the utter destruction of its capital in this quake.
1950 | Assam, India | 8.7 | X | 574 | The largest quake ever recorded in South Asia killed relatively few people in a lightly populated region along the Indo-Chinese border.
1960 | Valdivia and Puerto Montt ( Chile ) | 9.5 | XI | 5,700 | The Chile earthquake of 1960, the largest quake ever recorded in the world, produced a tsunami that crossed the Pacific Ocean to Japan, where it killed more than 100 people.
1963 | Skopje, Macedonia | 6.9 | X | 1,070 | The capital of Macedonia had to be rebuilt almost completely following this quake.
1964 | Prince William Sound, Alaska, U.S. | 9.2 | – | 131 | Anchorage, Seward, and Valdez were damaged, but most deaths in the Alaska earthquake of 1964 were caused by tsunamis in Alaska and as far away as California.
1970 | Chimbote, Peru | 7.9 | – | 70,000 | Most of the damage and loss of life resulting from the Ancash earthquake of 1970 was caused by landslides and the collapse of poorly constructed buildings.
1972 | Managua, Nicaragua | 6.2 | – | 10,000 | The centre of the capital of Nicaragua was almost completely destroyed; the business section was later rebuilt some 6 miles ( 10 km ) away.
1976 | Guatemala City, Guatemala | 7.5 | IX | 23,000 | Rebuilt following a series of devastating quakes in 1917 – 18, the capital of Guatemala again suffered great destruction.
1976 | Tangshan ( China ) | 7.5 | X | 242,000 | In the Tangshan earthquake of 1976, this industrial city was almost completely destroyed in the worst earthquake disaster in modern history.
1985 | Michoacán state and Mexico City, Mexico | 8.1 | IX | 10,000 | The centre of Mexico City, built largely on the soft subsoil of an ancient lake, suffered great damage in the Mexico City earthquake of 1985.
1988 | Spitak and Gyumri, Armenia | 6.8 | X | 25,000 | This quake destroyed nearly one-third of Armenia’s industrial capacity.
1989 | Loma Prieta, California, U.S. | 7.1 | IX | 62 | The San Francisco–Oakland earthquake of 1989, the first sizable movement of the San Andreas Fault since 1906, collapsed a section of the San Francisco–Oakland Bay Bridge.
1994

Northridge,

California

(USA)

6.8

IX

60

Centred in the urbanized San Fernando Valley, the Northridge earthquake of 1994 collapsed freeways and some buildings, but damage was limited by earthquake-resistant construction.

1995

Kobe,

(Japan)

6.9

XI

5,502

The Great Hanshin Earthquake destroyed or damaged 200,000 buildings and left 300,000 people homeless.

1999

Izmit,Turkey

7.4

X

17,000

The Izmit earthquake of 1999 heavily damaged the industrial city ofIzmit and the naval base at Golcuk.

1999

Nan-t’ou county,Taiwan

7.7

X

2,400

The Taiwan earthquake of 1999, the worst to hitTaiwan since 1935, provided a wealth of digitized data for seismic and engineering studies.

2001

Bhuj,

Gujarat

( state )

India

8.0

X

20,000

The Bhuj earthquake of 2001, possibly the deadliest ever to hitIndia, was felt acrossIndia andPakistan.

2003

Bam

(Iran)

6.6

IX

26,000

This ancientSilk Roadfortress city, built mostly of mud brick, was almost completely destroyed.

2004

Aceh

( province )

Sumatra

(Indonesia)

9.1

200,000

The deaths resulting from this offshore quake actually were caused by a tsunami originating in the Indian Ocean that, in addition to killing more than 150,000 inIndonesia, killed people as far away asSri Lanka andSomalia.

2005

Azad Kashmir

(Pakistanadministered )

( Kashmir )

7.6

VIII

80,000

The Kashmir earthquake of 2005, perhaps the deadliest shock ever to strikeSouth Asia, left hundreds of thousands of people exposed to the coming winter weather.

2008

Sichuan

( province )

(China

7.9

IX

69,000

The Sichuan earthquake of 2008 left over 5 million people homeless across the region, and over half of Beichuan city was destroyed by the initial seismic event and the release of water from a lake formed by nearby landslides.

2009

L’Aquila,

(Italy)

6.3

VIII

300

The L’Aquila earthquake of 2009 left more than 60,000 people homeless and damaged many of the city’s medieval buildings.

2010

Port-au-Prince,

(Haiti)

7.0

IX

316,000

The Haiti earthquake of 2010 devastated the metropolitan area ofPort-au-Prince and left an estimated 1.5 million survivors homeless.

2010

Maule,

(Chile)

8.8

VIII

521

The Chile earthquake of 2010 produced widespread damage inChile’s central region and triggered tsunami warnings throughout the Pacific basin.

2010

Christchurch,(New Zealand)

7.0

VIII

180

Most of the devastation associated with the Christchurch earthquakes of 2010–11 resulted from a magnitude-6.3 aftershock that struck on February 22, 2011.

2011

Honshu,

(Japan)

9.0

VIII

20,000

The powerful Japan earthquake and tsunami of 2011, which sent tsunami waves across the Pacific basin, caused widespread damage throughout easternHonshu.

2011

Erciş

And

Van,

(Turkey)

7.2

IX

The Erciş-Van earthquake of 2011 destroyed several apartment complexes and shattered mud-brick homes throughout the region.
Data Sources: National Oceanic and Atmospheric Administration ( NOAA ), National Geophysical Data Center ( NGDC ), Significant Earthquake Database ( SED ), a searchable online database using the Catalog of Significant Earthquakes 2150 B.C. – 1991 A.D. ( with Addenda ); and U.S. Geological Survey ( USGS ), Earthquake Hazards Program.

* Measures of magnitude may differ from other sources.
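Because the magnitude column above is logarithmic, entries a couple of units apart are not remotely comparable in energy terms. A minimal sketch of that comparison, using the standard Gutenberg–Richter energy relation ( log10 E ≈ 1.5 M + 4.8, with E in joules ); the magnitudes plugged in are taken from the table above:

```python
def seismic_energy_joules(magnitude: float) -> float:
    """Approximate radiated seismic energy in joules for a given magnitude,
    via the Gutenberg-Richter energy relation: log10(E) = 1.5*M + 4.8."""
    return 10 ** (1.5 * magnitude + 4.8)

def energy_ratio(m_big: float, m_small: float) -> float:
    """How many times more energy the larger event releases."""
    return seismic_energy_joules(m_big) / seismic_energy_joules(m_small)

# Each whole step in magnitude carries roughly 31.6x the energy:
print(round(energy_ratio(8.0, 7.0), 1))   # 31.6
# 1960 Valdivia ( 9.5 ) versus 1994 Northridge ( 6.8 ):
print(round(energy_ratio(9.5, 6.8)))      # 11220
```

This is why the 1960 Chile event dwarfs every other entry in energy terms even though several smaller quakes killed far more people: death tolls track construction quality and population density more than radiated energy.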


Additional Reading

Earthquakes are covered mainly in books on seismology.

Recommended introductory texts are:

Bruce A. Bolt, Earthquakes, 4th ed. (1999), and Earthquakes and Geological Discovery (1993); and,

Jack Oliver, Shocks and Rocks: Seismology and the Plate Tectonics Revolution (1996).

Comprehensive books on key aspects of seismic hazards are:

Leon Reiter, Earthquake Hazard Analysis – Issues and Insights (1990); and,

Robert S. Yeats, Kerry Sieh, and Clarence R. Allen, The Geology of Earthquakes (1997).

A history of discriminating between underground nuclear explosions and natural earthquakes is given by:

Bruce A. Bolt, Nuclear Explosions and Earthquakes: The Parted Veil (1976).

More advanced texts that treat the theory of earthquake waves in detail are:

Agustín Udías, Principles of Seismology (1999);

Thorne Lay and Terry C. Wallace, Modern Global Seismology (1995);

Peter M. Shearer, Introduction to Seismology (1999); and,

K.E. Bullen and Bruce A. Bolt, An Introduction to the Theory of Seismology, 4th ed. (1985).



Citations


MLA Style: “earthquake.” Encyclopædia Britannica. Encyclopædia Britannica Online. Encyclopædia Britannica Inc., 2012. Web. 21 Mar. 2012. http://www.britannica.com/EBchecked/topic/176199/earthquake

Reference

http://www.britannica.com/EBchecked/topic/176199/earthquake/247989/Shallow-intermediate-and-deep-foci?anchor=ref105456

– – – –

Feeling ‘educated’? Think you’re out of the water on the subject of earthquakes and tsunamis?

News from March 23, 2012, however, contradicts decades of professional scientific knowledge and studies so, if you were just feeling ‘overly educated’ about earthquakes and tsunamis – don’t be. You’re now lost at sea, in the same proverbial ‘boat’, with all those global government scientific and technical ( S&T ) professionals who thought they understood previous information surrounding earthquakes and tsunamis.

After comparing the Japan 9.0 ‘earthquake directional arrows’ depicted on the charts ( further above ) with ocean currents, tidal charts and trade winds from the global jet stream, there is a problem that cannot be explained: on March 23, 2012 British Columbia, Canada reported that its northwest Pacific Ocean coastal sea waters held a 100-foot fishing boat ‘still afloat’ – more than 1 year after the tsunami from Japan’s 9.0 earthquake on March 11, 2011.

[ IMAGE ( above ): 11MAR11 Japan 9.0 earthquake tsunami victim fishing boat ( 50-metre ) found more than 1 year later still adrift in the Pacific Ocean – but thousands of miles away – off the North America Pacific Ocean west coastal territory of Haida Gwaii, British Columbia, Canada ( click on image to enlarge ) ]

– – – –

Source: CBC News – British Columbia ( Canada )

Tsunami Linked Fishing Boat Adrift Off B.C.

Nobody Believed Aboard 50-Metre Vessel Swept Away In 2011 Japanese Disaster – CBC News

March 23, 2012 21:35 ( PST ) Updated from: 23MAR12 18:59 ( PST )

A Japanese fishing boat that was washed out to sea in the March 2011 Japanese tsunami has been located adrift off the coast of British Columbia ( B.C. ), according to the federal Transport Ministry.

The 50-metre vessel was spotted by the crew of an aircraft on routine patrol about 275 kilometres off Haida Gwaii, formerly known as the Queen Charlotte Islands, ministry spokeswoman Sau Sau Liu said Friday.

“Close visual aerial inspection and hails to the ship indicate there is no one on board,” Liu said. “The owner of the vessel has been contacted and made aware of its location.”

U.S. Senator Maria Cantwell, of Washington, said in a release that the boat was expected to drift slowly southeast.

“On its current trajectory and speed, the vessel would not [ yet ] make landfall for approximately 50 days,” Cantwell said. Cantwell did not specify where landfall was expected to be.
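Cantwell’s 50-day estimate is easy to sanity-check against the patrol report. A rough sketch, where the 275 km offshore figure and the sighting date come from the report itself, and the ~6,500 km trans-Pacific crossing distance is an assumption made here purely for illustration:

```python
# Sanity-check of the drift figures reported above.
SIGHTING_OFFSHORE_KM = 275      # distance off Haida Gwaii (reported)
DAYS_TO_LANDFALL = 50           # Senator Cantwell's estimate
DAYS_ADRIFT = 378               # 11 Mar 2011 -> 23 Mar 2012 (366 + 12 days)
PACIFIC_CROSSING_KM = 6500      # assumed Tohoku-to-Haida-Gwaii distance

# Implied speed over the remaining leg to the coast:
coastal_kmd = SIGHTING_OFFSHORE_KM / DAYS_TO_LANDFALL      # 5.5 km/day
# Implied average speed over the whole crossing:
crossing_kmd = PACIFIC_CROSSING_KM / DAYS_ADRIFT           # ~17 km/day

print(f"{coastal_kmd:.1f} km/day near shore, "
      f"{crossing_kmd:.1f} km/day averaged over the crossing")
```

The near-shore figure being several times slower than the crossing average is plausible: the strong Kuroshio / North Pacific current drive mid-ocean drift, while coastal waters are slower and more variable.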

First large debris

The boat is the first large piece of debris found following the earthquake and tsunami that struck Japan one year ago.

Scientists at the University of Hawaii say a field of about 18 million tonnes of debris is slowly being carried by ocean currents toward North America. The field is estimated to be about 3,200 kilometres long and 1,600 kilometres wide.
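The scale of the reported debris field is easier to grasp as a surface density. A rough sketch, reading the reported tonnage as 18 million tonnes ( the doubled unit in “18,000,000 million” is assumed here to be a typo ) and treating the field as a simple rectangle:

```python
# Back-of-the-envelope density of the reported debris field.
DEBRIS_TONNES = 18_000_000      # reported total, read as 18 million tonnes
FIELD_LENGTH_KM = 3_200         # reported field length
FIELD_WIDTH_KM = 1_600          # reported field width

area_km2 = FIELD_LENGTH_KM * FIELD_WIDTH_KM   # 5,120,000 km^2
density = DEBRIS_TONNES / area_km2            # tonnes per square km

print(f"{area_km2:,} km^2, ~{density:.1f} t/km^2")   # 5,120,000 km^2, ~3.5 t/km^2
```

A few tonnes per square kilometre explains why the field is invisible from a ship yet still delivers debris to shorelines for years.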

Scientists have estimated some of the debris would hit B.C. shores by 2014.

Some people on the west coast of Vancouver Island believe ‘smaller pieces of debris have already washed ashore there’.

The March 11, 2011, tsunami was generated after a magnitude 9.0 earthquake struck off the coast of northern Japan. The huge waves and swells of the tsunami moved inland and then retreated back into the Pacific Ocean, carrying human beings, wreckage of buildings, cars and boats.

Nearly 19,000 people were killed.

Reference

http://www.cbc.ca/news/canada/british-columbia/story/2012/03/23/bc-fishing-boat-tsunami-debris.html?cmp=rss

– – – –

Submitted for review and commentary by,

Kentron Intellect Research Vault

E-MAIL: KentronIntellectResearchVault@Gmail.Com

WWW: http://KentronIntellectResearchVault.WordPress.Com

References

http://earthquake.usgs.gov/earthquakes/world/japan/031111_M9.0prelim_geodetic_slip.php
http://en.wikipedia.org/wiki/Moment_magnitude_scale
http://www.gsi.go.jp/cais/topic110315.2-index-e.html
http://www.seismolab.caltech.edu
http://www.tectonics.caltech.edu/slip_history/2011_taiheiyo-oki
http://supersites.earthobservations.org/ARIA_japan_co_postseismic.pdf
ftp://sideshow.jpl.nasa.gov/pub/usrs/ARIA/README.txt
http://speclib.jpl.nasa.gov/documents/jhu_desc
http://earthquake.usgs.gov/regional/pacnw/paleo/greateq/conf.php
http://www.passcal.nmt.edu/content/array-arrays-elusive-ets-cascadia-subduction-zone
http://wcda.pgc.nrcan.gc.ca:8080/wcda/tams_e.php
http://www.pnsn.org/tremor
http://earthquake.usgs.gov/earthquakes/recenteqscanv/Quakes/quakes_all.html
http://nthmp.tsunami.gov
http://wcatwc.arh.noaa.gov
http://www.pnsn.org/NEWS/PRESS_RELEASES/CAFE/CAFE_intro.html
http://www.pnsn.org/WEBICORDER/DEEPTREM/summer2009.html
http://earthquake.usgs.gov/prepare
http://www.passcal.nmt.edu/content/usarray
http://www.iris.washington.edu/hq
http://www.iris.edu/dms/dmc
http://www.iris.edu/dhi/clients.htm
http://www.iris.edu/hq/middle_america/docs/presentations/1026/MORENO.pdf
http://www.unavco.org/aboutus/history.html
http://earthquake.usgs.gov/monitoring/anss
http://earthquake.usgs.gov/regional/asl
http://earthquake.usgs.gov/regional/asl/data
http://www.usarray.org/files/docs/pubs/US_Data_Plan_Final-V7.pdf
http://neic.usgs.gov/neis/gis/station_comma_list.asc
http://earthquake.usgs.gov/research/physics/lab
http://earthquake.usgs.gov/regional/asl/data
http://pubs.usgs.gov/gip/dynamic/Pangaea.html
http://coaps.fsu.edu/scatterometry/meeting/docs/2009_august/intro/shimoda.pdf
http://coaps.fsu.edu/scatterometry/meeting/past.php
http://eqinfo.ucsd.edu/dbrecenteqs/anza
http://www.ceri.memphis.edu/seismic
http://conceptactivityresearchvault.wordpress.com/2011/03/28/global-environmental-intelligence-gei
http://www.af.mil/information/factsheets/factsheet_print.asp?fsID=157&page=1
https://login.afwa.af.mil/register/
https://login.afwa.af.mil/amserver/UI/Login?goto=https%3A%2F%2Fweather.afwa.af.mil%3A443%2FHOST_HOME%2FDNXM%2FWRF%2Findex.html
http://www.globalsecurity.org/space/library/news/2011/space-110225-afns01.htm
http://www.wrf-model.org/plots/wrfrealtime.php

Advertisements

Secret ET Technologies

[ NOTE: The video ( above ), amidst its computer-generated imagery ( CGI ), manipulates many of the actual ‘image document layout photographs of symbolic technology’ and ‘laboratory premise photos of components and sub-structures’ ( removed from the U.S. government classified laboratory Project CARET ), plus ‘select photos’ of unidentified flying objects ( UFOs ) bearing similar symbols, worked on by an individual using the alias name “Isaac” who publicly released partial details about this story. ]

Secret ET Technologies
by, Concept Activity Research Vault ( CARV )

November 22, 2010 12:37:42 ( PST ) Update ( Published: October 23, 2010 )

USA, California, Menlo Park – November 22, 2010 – What some perceived as chicken footprints may in fact be extraterrestrial symbolic construct technologies. Years ago, an individual – using the alias name “Isaac” – conveyed a multi-page report ( “CARET” ), laboratory photographs, and detailed personal encounters ( from at least 1984 through 1987 ) concerning what was believed to be a U.S. Department of Defense ( DoD ) Defense Advanced Research Projects Agency ( DARPA ) program recruiting people to work on a specific project ( believed Phase II or Phase III ) studying what was ‘officially briefed’ to Isaac – amongst his task team – as highly complex ‘extraterrestrial’ structures, materials, components, and construct-language symbols.

This Unwanted Publicity Intelligence annex report ( herein ) provides more detailed analysis of the Isaac-provided information that news media organizations half-heartedly carried to the public years ago.

The extraterrestrial materials were more complex than anything Isaac’s group had ever seen; supercomputers lumbered under the task of processing the extremely complex substrates and geometric symbols, amongst other secret-sensitive items, that Isaac and others analyzed and deciphered.

Within the building, amongst other secret-sensitive items, combinatoric studies of the extremely complex substrates, symbols, and more developed an extremely complex ‘primer’; the program for which Isaac’s report is named was “Commercial Applications Research for Extra-Terrestrial Technology” ( C.A.R.E.T., or CARET ).

Isaac’s personal account ( further below ) reports the aforementioned work was conducted within what first appeared to be only an upscale industrial office complex ‘building’, presumably located in the State of California, County of Santa Clara.

Isaac describes the facility’s adjacencies as multi-compartmentalized ‘individual government contractor offices’ – believed assigned to various sensitive tasks for the United States government – beneath which, amongst other compartmentalized secret characteristics ( he never mentioned the facility being a ‘self-sealing building’ ), five ( 5 ) underground floors were hidden.

Five ( 5 ) stories down, the sub-surface levels – not reported by Isaac, but easily ascertained – must have included:

– Underground level one [ 1 ]: dedicated parking for the standard passenger vehicles of ‘secondary staff’ and/or ‘special visitors’ ( e.g. military officials, etc. );

– Underground level two [ 2 ]: dedicated parking for ‘equipment delivery’ trucks; and,

– Underground level three [ 3 ]: dedicated parking for ‘militarized troop personnel’ vans and/or buses.

Careful observation, when combining all the aforementioned, initially brings a ‘few new questions’ followed by a few ‘remote suppositions’ ( immediately below ):

1. Could Isaac’s seemingly ‘personal account’ have actually been ‘cleverly ghost-written’ for ‘someone else’?

2. Could Isaac have actually been ‘female’?

3. Could the ‘name’ of the author, “Isaac,” have actually been derived from the ‘name’ of a ‘male sibling or spouse’?

4. Could Isaac’s seemingly ‘personal account’ have actually ‘taken place geographically elsewhere’?

“Isaac’s” reports ( further below ) begin his ‘personal accountings’ by ‘laying a foundation scene’ surrounding the “Silicon Valley” industrial technology history of the State of California, County of Santa Clara. Isaac then simply includes a report ‘cover page’ entitled “Palo Alto CARET Laboratory,” so, for all intents and purposes, readers may instantly gravitate to the assumption that Isaac’s personal account took place in Palo Alto, California; yet “Isaac” mentions – but does not detail – only a very few ‘building characteristics’, and only in the most general of terms. Might Isaac have purposely laid such a foundation after altering the true facility name on the report to the “Palo Alto CARET Laboratory” ( “PACL” ), when the ‘building’ may actually have been ‘remotely located elsewhere’, albeit within or under a ‘temporary U.S. government contract project’ and/or as an ‘adjunct’ of yet another larger organization?

Was Isaac’s ‘reported building’ just a stand-alone, street-side upscale industrial office building of normal iron-reinforced concrete / cement tilt-up construction?

At the time of Isaac’s personal account, the former ROCKWELL SCIENCE CENTER PALO ALTO LABORATORY ( 444 High Street, Suite #400, Palo Alto, California 94301 ) existed near a plethora of ‘other such organization buildings’ performing secret-sensitive work in the Silicon Valley area of northern California.

Plenty of such ‘remotely located buildings’ exist.

To name a ‘few’, there are ‘buildings remotely situated’ at the U.S. National Laboratory in Los Alamos, New Mexico; although ‘such buildings and private contractors are geographically situated there’, funding secrets are hidden under the ‘administrative domain auspices’ of the ‘University of California’.

But where would “Isaac’s” reported ‘armed military personnel’ easily appear from in such a ‘building’?

Other ‘remotely situated buildings’ also exist – under U.S. government contract to private companies – on military reservations such as the United States Air Force Research Laboratory ( AFRL ), which oversees “PHILIPS Laboratory” secret-sensitive work performed and tested ‘near but not within’ Kirtland Air Force Base, New Mexico, yet secretly hidden on that huge ‘reservation’.

[ PHOTO ( above ): PHILIPS Laboratory at Kirtland Air Force Base, New Mexico ( USA ) NOTE: click to enlarge photo details. ]

In southern California, the Edwards Air Force Base reservation holds unique offerings, amongst other secrets: after a vehicle passes the ‘entrance sign’ it must be driven an additional 20 miles before even reaching the ‘main gate’ to gain ‘official admittance’, and then only with ‘further restricted movement’. Scattered all around that ‘reservation’ are a plethora of ‘remotely situated buildings’ used by private-business U.S. government contract holders performing, amongst other things, U.S. government secret-sensitive work within complexes of buildings such as those seen inside Area 51 ( also known as ) the Lazy G Ranch ( also known as ) The Ranch ( Nevada, USA ).

[ PHOTO ( above): Area 51 (aka) Lazy G Ranch (aka) The Ranch ‘main gate’ ( Nevada, USA ) circa 1970s. NOTE: click to enlarge photo details. ]

[ PHOTO ( above): Area 51 (aka) Lazy G Ranch (aka) The Ranch ( Nevada, USA ) circa 1970s. NOTE: click to enlarge photo details. ]

But even Area 51 (aka) the Lazy G Ranch (aka) The Ranch, located in the Nevada desert, cannot be compared ( here ) to its secret-sensitive ‘sister reservation’, known only as a ‘proving ground’, named “Dugway” ( Utah, USA ).

In 1968, the U.S. Navy had private contractors build its secret-sensitive China Lake Naval Weapons Station ( near Trona, California ), whereon that ‘reservation’ holds one ( 1 ) building – with eight ( 8 ) subterranean floor levels – stuck out in the middle of the southern California desert. If the U.S. Navy is a military ship-sailing and aircraft-flying defense organization, what is it doing with an 8-story subterranean building in the middle of a desert?

From within Isaac’s given parameter basics describing the reported ‘upscale city industrial office building’ complex – with five ( 5 ) subterranean stories – the closest resemblance within the State of California, County of Santa Clara Silicon Valley area was the ‘unknown building predecessor’ of what later became known as the ‘first privately-owned and operated business’ belonging to the U.S. Central Intelligence Agency ( CIA ), named QIC ( believed known as ) QUANTUM INTERFACE CENTER ( formerly known as ) IN-Q-IT CORPORATION ( formerly known as ) IN-Q-TEL ( affectionately nicknamed ) CIA-IN-Q-TEL. There, the CIA business performed ‘special technology’ research and development ( R&D ) – although this was never fully reported – on ‘applications’ for what would later be known as “Commercial Off The Shelf” ( C.O.T.S. / COTS ) product development of secret-sensitive technologies. Plenty of such ‘products’, in essence, would accumulate for later distribution, to be eventually traded for ‘other valuable considerations’ ( the very little press coverage reported it as “… products to be sold to …” ) in exchange for what a few ‘private companies’ ( e.g. the ‘foreign-based company’ PHILIPS, and a few select others ) could offer, or were already providing, under U.S. government contract(s). Such an arrangement could then secretly return U.S. Congress ‘budget-approved’ U.S. Department of the Treasury funds – by re-routing or mirroring bank-wire-transferred monies back into the U.S. Central Intelligence Agency ( CIA ) private business – which could then re-route those monies, as deemed fit, secretly into yet other intelligence projects and programs outside U.S. Congressional scrutiny.

But could all this ‘really happen’?

The webpage links ( above ) show who was initially put in charge and which senior executives were selectively chosen from key private industries to lead the private U.S. Central Intelligence Agency business; so, it really should come as no surprise to the few who understand the mechanics behind international stock-market trading and international bank wire transfers between domestic and foreign operations of the United States Federal Reserve System.

[ PHOTO ( above ) : U.S. Central Intelligence Agency ( CIA ) business IN-Q-TEL CORPORATION logo. ]

A few such secret-sensitive, self-sealing, rad-hard ( radiation-hardened concrete / cement via ‘gamma radiation saturation’ ) buildings can reposition themselves – prior to the onslaught of a U.S. national emergency – via remote triggering that submerges each building’s entire mass into a special covered multi-story hole dug in the ground beneath it.

So, did Isaac’s ‘reported building’ have such ‘additional capacities’ or ‘more’?

One might consider these to be distinct possibilities based on what the “CARET” report entailed and according to “Isaac’s” ‘personal accountings’ surrounding such.

Courtesy: Unwanted Publicity Information Group

====

My Experience With The CARET Program And Extra-Terrestrial Technology

by, Isaac [ alias moniker used by the original author ]

June 2007

This letter is part of a package I’ve assembled for Coast to Coast AM [ a nightly broadcast radio program in the United States of America ] to distribute to its audience. It is a companion to numerous ‘document’ and ‘photo’ scans and should not be separated from them.

You can call me Isaac, an alias I’ve chosen as a simple measure of protection while I release what would be called tremendously ‘sensitive information’ even by today’s standards.

‘Sensitive’ is not necessarily synonymous with ‘dangerous’, though, which is why my conscience is clear as I offer this material up for the public.

My government [ United States of America ] has its reasons for its continual secrecy, and I sympathize with many of them, but the truth is that I’m getting old and I’m not interested in meeting my maker one day with any more baggage than necessary.

Furthermore, I put a little more faith in humanity than my former bosses do, and I think that a release of at least some of this information could help a lot ‘more’ than it could ‘hurt’, especially in today’s world.

I should be clear before I begin, as a final note:

I am not interested in making myself vulnerable to the consequences of betraying the trust of my superiors and will not divulge any personal information that could determine my identity.

However my intent is not to deceive, so ‘information that I think is too risky to share’ will be simply ‘left out’ rather than obfuscated in some way ( aside from my alias, which I freely admit is not my real name ).

I would estimate that with the information contained in this letter, I could be narrowed down to one [ 1 ] of maybe 30 to 50 people at best, so I feel reasonably secure.

Some Explanation for the Recent Sightings –

For many years I’ve occasionally considered the release of at least some of the material I possess, but the recent wave of photos and sightings has prompted me to cut to the chase and do so now.

I should first be clear that I’m not directly familiar with any of the crafts seen in the photos in their entirety. I’ve never seen them in a hangar or worked on them myself or seen aliens zipping around in them. However, I have worked with and seen many of the parts visible in these crafts, some of which can be seen in the Q3-85 Inventory Review scan found at the top of this page.

More importantly though, I’m very familiar with the ‘language’ on their [ craft(s) ] ‘undersides’ [ under bellies ] seen clearly in photos by Chad, Rajman, and – ‘another form’ – in the Big Basin photos.

One question I can answer – for sure – is why they are suddenly here.

These crafts have probably existed – in their current form – for decades, and I can say – for sure – that the technology behind them has existed for decades before that.

The ‘language’, in fact – I’ll explain shortly why I keep putting that in quotes – was the subject of ‘my work’ in years past. I’ll cover ‘that’ as well.

The reason they [ extraterrestrial craft(s) ] are suddenly ‘visible’, however is ‘another matter’ entirely.

These crafts – assuming they’re anything like the hardware I worked with in the 1980s ( assuming they’re better, in fact ) – are equipped with technology that enables invisibility. That ‘ability’ can be controlled both ‘on board’ the craft, and ‘remotely’.

However, what’s important in this case is that this ‘invisibility’ can also be ‘disrupted’ by ‘other technology’. Think of it like ‘radar jamming’.

I would bet my life savings ( since I know this has happened before ) that these craft are ‘becoming visible’ and then ‘returning to invisibility’ arbitrarily – probably unintentionally – and undoubtedly for only ‘short periods’ due to the ‘activity of a kind’ of ‘disrupting technology’ [ sonic flocculation ] being ‘set-off elsewhere’ but ‘near-by’.

I’m especially sure of this in the case of the Big Basin sightings where the witnesses themselves reported seeing the craft just ‘appear’ and ‘disappear’.

This is especially likely because of the way the witness described one [ 1 ] of the appearances being only a ‘momentary flicker’, which is consistent with the ‘unintentional’, ‘intermittent triggering’ of such a ‘device’.

It’s no surprise that these sightings are all taking place in ‘California’ ( USA ), and especially the Saratoga Bay / South Bay area.

Not far from Saratoga is Mountain View, California ( USA ) / Sunnyvale, California ( USA ), home to Moffett Field [ formerly, a United States Army Air Corps military airfield / United States Air Force Base ( USAFB ) ] and the [ National Aeronautics and Space Administration ] NASA Ames Research Center.

Again, I’d be willing to bet – just about anything – that the device capable of hijacking the cloaking of these nearby craft was inadvertently triggered, probably during some kind of experiment, at the exact moment they were being seen.

Miles away, in Big Basin, the witnesses were in the right place – at the right time – and saw the results of this disruption with their own eyes.

God knows what else was suddenly appearing in the skies at that moment, and who else may have seen it.

I’ve had some direct contact with this device, or at least a device capable of the same thing, and this kind of mistake is not unprecedented.

I am personally aware of at least one [ 1 ] other incident in which this kind of technology was accidentally set off, resulting in the sudden visibility of normally invisible things.

The only difference is that these days, cameras are a lot more common!

The technology itself is ‘not’ ours, or at least it was ‘not’ in the 1980s.

Much like the technology in these crafts themselves, the device capable of remotely hijacking vehicle ‘cloaking’ comes from a non-human source too.

Why we were given this technology has never been clear to me, but it’s responsible for a lot.

Our having access to this kind of device, along with our occasionally haphazard experimentation on it, has led to everything from cloaking malfunctions like this to full-blown crashes.

I can assure you that most ( and in my opinion all ) incidents of UFO crashes, or that kind of thing, had more to do with our meddling with extremely powerful technology at an inopportune time than with mechanical failure on their part.

Trust me, those things don’t fail unless something even more powerful than them makes them fail ( intentionally or not ). Think of it like a stray bullet. You can be hit by one at any time, without warning, and even the shooter did ‘not’ intend to hit you.

I can assure you heads are rolling over this as well.

If anyone notices a brilliant but sloppy ‘physicist’ patrolling the streets of Baghdad [ Iraq ] in the next couple weeks, I’d be willing to guess how he got there. ( I kid – of course – as I certainly hope that has ‘not’ actually happened in this case ).

I would now like to explain how it is that I know this.

The CARET Program –

My story begins the same as it did for many of my co-workers, with graduate and post-graduate work at university in electrical engineering. I had always been interested in computer science, which was a very new field at the time, and my interest was piqued by my first exposure to a Tixo [ TX-0 computer ] during grad school.

In the years following school I took a scenic route through the tech industry and worked for the kinds of companies you would expect, until I was offered a job at the United States Department of Defense [ DoD ] and things took a very different turn.

My time at the DoD [ United States Department of Defense ] was mostly uneventful but I was there for quite a while. I apparently proved myself to be reasonably intelligent and loyal.

By 1984 these qualities along with my technical background made me a likely candidate for a new program they were recruiting for called “CARET.”

Before I explain what CARET was, I should back up a little.

By 1984, Silicon Valley had been a juggernaut of technology for decades. In less than 40-years since the appearance of Shockley’s transistor, this part of the world had already produced a multi-billion dollar computer industry and made technological strides that were unprecedented in other fields – from hypertext and online collaboration in 1968 to the Alto in 1973.

Private industry in Silicon Valley was responsible for some of the most incredible technological leaps in history and this fact did not go unnoticed by the US government and military.

I don’t claim to have any special knowledge about Roswell [ New Mexico, USA incident believed to be an extraterrestrial flying object ( UFO ) crash ] or any of the other alleged early UFO events, but I do know that whatever the exact origin, the ‘military’ was hard at work trying to understand and use the ‘extraterrestrial artifacts’ it had in its ‘possession’.

While there had been a great deal of progress overall, things were not moving as quickly as some would have liked.

So, in 1984, the CARET program was created with the aim of harnessing the abilities of private industry in Silicon Valley and applying them to the ongoing task of understanding extra-terrestrial technology.

One of the best examples of the power of the tech sector was XEROX PARC, a research center in Palo Alto, California [ USA ].

XPARC was responsible for some of the major milestones in the history of computing.

While I never had the privilege of working there [ XEROX PARC ( Palo Alto, California, USA ) ] myself, I ‘did’ know many of the people who ‘did’, and I can say that they were among the brightest engineers I ever knew.

XPARC served as one [ 1 ] of the models for the CARET program’s first incarnation, a facility called the PALO ALTO CARET LABORATORY ( PACL ) – lovingly pronounced, “packle” during my time there.

This [ Palo Alto CARET Laboratory ] was where [ Palo Alto, California, USA ] I worked, along with numerous other civilians, under the auspices of military brass who were eager to find out how the tech sector made so much progress so quickly.

My time at the DoD [ U.S. Department Of Defense ] was a major factor behind why I was chosen, and in fact about 30+ [ 30 or more ] others – who were hired around the same time – had also been at the Department [ U.S. Department Of Defense ] about as long but this was not the case for everyone.

A couple of my co-workers were plucked right from places like IBM [ INTERNATIONAL BUSINESS MACHINES ], and at least two [ 2 ] of them came from XPARC [ XEROX PARC ( Palo Alto, California, USA ) ] itself.

My DoD [ U.S. Department Of Defense ] experience did make me more eligible for positions of management, however, which is how I have so much of this material [ documents, photos, etc. ] in my possession to begin with.

So, in other words, civilians ( like myself ) who had – at most – some decent experience working for the DoD [ U.S. Department Of Defense ] but no actual military training or involvement were suddenly finding ourselves in the same room as highly classified extra-terrestrial technology.

Of course they spent about 2-months briefing us all before we saw or did anything, and did their best to convince us that if we ever leaked a single detail about what we were being told, they’d do everything short of digging up our ancestors and putting a few slugs in them too – just for good measure.

It seemed like there was an armed guard in every corner of every room.

I’d [ I had ] worked under some pretty hefty NDAs [ Non-Disclosure Agreements ] in my time but this was so far out of my depth. I didn’t think I was going to last 2-weeks in an environment like that. But amazingly things got off to a good start.

They wanted us, plain and simple, and our industry had shown itself to be so good at what it did that they were just about ready to give us carte blanche.

Of course, nothing with the military is ever that simple, and as is often the case they wanted to have their cake and eat it too. What I mean by this is that despite their interest in picking our brains and learning whatever they could from our way of doing things, they still wanted to do it ‘their way’ often enough to frustrate us. At this point I’m going to gloss over the emotional side of this experience, because this letter isn’t intended to be a memoir, but I will say that there’s almost no way to describe the impact this kind of revelation has on your mind.

There are very few moments in life in which your entire world view is turned forever upside down, but this was one of them.

I still remember that turning point – during the briefing – when I realized what he’d just told us, and that I hadn’t heard him wrong, and that it wasn’t some kind of joke.

In retrospect, the whole thing feels like it was in slow motion, from that ‘slight pause’ he took – just before the term “extra-terrestrial” came out for the first time – to the way the room itself seemed to go off kilter as we collectively tried to grasp what was being said.

My reflex kept jumping back and forth between trying to look at the speaker, to understand him better, and looking at everyone else around me, to make sure I wasn’t the only one that was hearing this.

At the risk of sounding melodramatic, it’s a lot like a child learning his parents are divorcing. I never experienced that myself, but a very close friend of mine did when we were boys, and he confided in me a great deal about what the experience felt like. A lot of what he said would aptly describe what I was feeling in that room.

Here was a ‘trusted authority figure’ telling you something that you just don’t feel ready for, and putting a burden on your mind that you don’t necessarily want to carry. The moment that first word comes out, all you can think about is what it was like only ‘seconds ago’, and knowing that life is never going to be as simple as it was ‘then’.

After all that time at the DoD [ U.S. Department Of Defense ], I thought I at least had some idea of what was going on in the world, but I’d never heard so much as a peep about this.

Maybe one day I’ll write more on this aspect, because it’s the kind of thing I really would like to get off my chest, but for now I’ll digress.

Unlike traditional research in this area, we weren’t working on new toys for the air force.

For numerous reasons, the CARET people decided to aim its efforts at ‘commercial applications’ rather than ‘military’ ones.

They basically wanted us to turn these ‘artifacts’ into something they could ‘patent’ and ‘sell’.

One of CARET’s most ‘appealing promises’ was the revenue generated by these product-ready technologies, which could be funneled right back into ‘black projects’. Working with a ‘commercial application’ in-mind was also yet another way to keep us in a familiar mind state. Developing technology for the military is very different than doing so for the ‘commercial sector’, and not having to worry about the difference was another way that CARET was very much ‘like private industry’.

CARET shined in the way it let us work the way we were used to working. They wanted to recreate as much of the environment we were used to as they could without compromising issues like security. That meant we got ‘free rein to set up’ our own ‘workflow’, ‘internal management structure’, ‘style manuals’, ‘documentation’, and the like. They wanted this to look and ‘feel like private industry’, ‘not the military’. They ‘knew’ this was ‘how to get the best work out of us’, and they were right.

But things didn’t go as smoothly when it came to matters like access to classified information.

They were exposing what is probably their single biggest secret to a group of people who had never even been through basic training and it was obvious that the gravity of this decision was never far from their minds.

We started the program with a small set of ‘extra-terrestrial artifacts’ along with ‘fairly elaborate briefings’ on ‘each’ as well as ‘access to a modest amount of what research had already been completed’.

It wasn’t long before we realized ‘we needed more’ though, and getting them to provide even the smallest amount of new material was like pulling teeth.

CARET stood for “Commercial Applications Research for Extra-Terrestrial Technology”, but we often joked that it should have stood for “Civilians Are Rarely Ever Trusted.”

PACL [ PALO ALTO CARET LABORATORY ] was located in Palo Alto [ California, USA ], but unlike XPARC [ XEROX PARC ( Palo Alto, California, USA ) ], it wasn’t at the end of a long road in the middle of a big complex surrounded by rolling hills and trees.

PACL was hidden in an ‘office complex’ – owned entirely by the military but ‘made to look like an unassuming tech company’.

From the street, all you could see was what appeared to be a normal ‘parking lot’ with a ‘gate’ and a ‘guard [ security ] booth’, and a 1-story building inside with a ‘fictitious name’ and ‘[ fictitious ] logo’.

What was ‘not visible’ – from the street – was that ‘behind’ the very ‘first set of doors’ was enough ‘armed guards’ to invade Poland, plus five [ 5 ] additional underground stories [ levels ].

They wanted to be as close as possible to the kinds of people they were looking to hire, and be able to bring them in with a minimum of fuss.

Inside, we had everything we needed. State of the art hardware and a staff of over 200 computer scientists, electrical engineers, mechanical engineers, physicists and mathematicians.

Most of us were civilians, as I’ve said, but some were military, a few of them had been working on this technology already.

Of course, you were never far from the barrel of a ‘machine gun’ – even ‘inside the labs’ themselves ( something many of us never got used to ) – and ‘bi-weekly tours’ were made by ‘military brass’ to ensure that not a single detail was out of line. Most of us underwent extensive searches on our way into and out of the building. There it was, probably the biggest secret in the world, in a bunch of parts spread out on laboratory tables in the middle of Palo Alto so you can imagine their concern.

One ‘downside’ to CARET was that it was ‘not’ as ‘well-connected’ as ‘other operations’ undoubtedly ‘were’.

I ‘never got to see’ any ‘actual extra-terrestrials’ ( not even photos ), and in fact ‘never even saw’ one [ 1 ] of their ‘complete vehicles’ – ’99% of what I saw’ was ‘related to the work at-hand’, all of which was conducted within a very narrow context on ‘individual artifacts only’. The remaining ’1% came from people’ I met through the program, many of whom were ‘working more closely’ with “the good stuff” or ‘had [ worked with it ] in the past’.

In fact, what was especially amusing about the whole affair was the way that our ‘military management’ almost ‘tried to act’ as if the ‘technology’ – we were essentially ‘reverse engineering’ – was ‘not extra-terrestrial’ at all.

Aside from the word “extra-terrestrial,” itself, we rarely heard any other terms like “alien” or “UFO” or “outer space” or anything. ‘Those aspects’ were ‘only mentioned briefly’ when absolutely ‘necessary to explain something’.

In many cases it was necessary to ‘differentiate’ between the different ‘races’ and ‘their’ respective ‘technology’, and they did ‘not’ even use the word “races.” They were referred to simply as different “sources.”

The Technology –

A lot of the technology we worked on was what you would expect, namely ‘anti-gravity’. Most of the ‘researchers’ ( on the staff ) – with ‘backgrounds’ in ‘propulsion’ and ‘rocketry’ – were ‘military’ men, but the ‘technology’ we were dealing with was so ‘out of this world’ that it didn’t really matter all that much what your background was because none of it applied.

All we could hope to do was use the ‘vocabulary’ of our respective fields as a way ‘to model’ the extremely bizarre ‘new concepts’ we were very slowly ‘beginning to understand’ as best we could.

A ‘rocket engineer’ doesn’t usually rub elbows much with a ‘computer scientist’, but inside PACL [ PALO ALTO CARET LABORATORY ( Palo Alto, California, USA ) ], we were all ‘equally mystified’ and were ready to ‘entertain any and all ideas’.

The ‘physicists’ made the most headway, initially because out of all of our skills, theirs ‘overlapped the most’ with the ‘concepts behind this technology’ ( although that isn’t saying much! ). Once they [ physicists ] got the ball rolling though, we began to find that many of the ‘concepts found in computer science’ were applicable as well, albeit in very vague ways.

While I didn’t do a lot of work with the antigrav [ anti-gravity ] ‘hardware’, myself, I was occasionally involved in the ‘assessment’ of ‘how’ that ‘technology’ was meant to ‘interface’ with its ‘user’.

The antigrav [ anti-gravity ] was amazing, of course, as were the ‘advances’ we were making with ‘materials engineering’ and so on.

But what interested me most then, and still amazes me most to this day, was something completely unrelated.

In fact, it was this ‘technology’ that immediately jumped out at me when I ‘saw’ the Chad and Rajman ‘photos’, and even more-so in the ‘Big Basin photos’.

The “Language” –

I put the word Language in quotes because calling what I am about to describe a “language” is a misnomer, although it is an easy mistake to make.

Their [ extraterrestrial ] ‘hardware’ was ‘not’ operated in quite the same way as ours.

In our technology, even today, we have a combination of ‘hardware and software’ running almost everything on the planet.

Software is more abstract than hardware, but ultimately it needs hardware to run it.

In other words, there’s no way to write a computer program on a piece of paper, set that piece of paper on a table or something, and expect it to actually do something.

The most powerful ‘code’ in the world still ‘does not actually do anything’ until a piece of ‘hardware interprets it [ software ]‘ and ‘translates’ its ‘commands’ into ‘actions’.

But ‘their [ extraterrestrial ] technology’ is ‘different’.

It really did operate like the magical piece of paper sitting on a table, in a manner of speaking.

They had something akin to a ‘language’ that could quite literally ‘execute’ itself – at least in the ‘presence’ of a very specific type of ‘field’ [ ‘field presence execution’ ].

The ‘language’, a term I am still using very loosely, is a ‘system’ of ‘symbols’ ( which does admittedly very much resemble a written language ) along with ‘geometric forms’ and ‘[ geometric ] patterns’ that fit together [ ‘interlocking’ ] to ‘form diagrams’ that are themselves ‘functional’.

Once they [ interlocking symbolic format diagrams ] are ‘drawn’ – so to speak – on a suitable ‘surface’ made of a suitable ‘material’ and in the ‘presence’ of a certain type of ‘field’, they immediately begin performing the desired tasks. It really did seem like magic to us, even after we began to understand the principles behind it.

I worked with these ‘symbols’ – more than anything [ else ] – during my time at PACL [ PALO ALTO CARET LABORATORY ( Palo Alto, California, USA ) ], and ‘recognized them’ the moment I saw them in the ‘photos’.

They appear in a very simple ‘form’ on Chad’s ‘craft’, but appear in the ‘more complex diagram form’ on the ‘underside’ of the ‘Big Basin craft’ as well.

Both are unmistakable, even at the small size of the Big Basin photos.

An example of a diagram in the style of the Big Basin craft is included with this in a series of scanned pages from the [ mistitled ] “Linguistic Analysis Primer”.

We needed a copy of that diagram to be utterly precise, and it took about a [ one – 1 ] month [ 30-days ] for a team of six [ 6 ] to ‘copy’ that ‘diagram’ into our drafting program!

Explaining everything I learned about this technology would fill up several volumes, but I will do my best to explain at least ‘some’ of the ‘concepts’ – as long as I am taking the time to write all this down.

First of all, you wouldn’t open-up their [ extraterrestrial ] ‘hardware’ to find a CPU here, and a data bus there, and some kind of memory over there.

Their [ extraterrestrial ] ‘hardware’ appeared to be ‘perfectly solid’, and consistent, in terms of ‘material’ – from one side to the other. Like a rock or a hunk of metal.

But upon [ much ] closer inspection, we began to learn that it was actually one [ 1 ] big ‘holographic computational substrate’ – each “computational element” ( essentially, individual ‘particles’ ) can ‘function independently’ but are ‘designed to function together’ in tremendously ‘large clusters’.

I say it’s ‘holographic’ because you can ‘divide it up into the smallest chunks’ you want and still find a scaled-down but complete representation of the whole system.

They produce a ‘non-linear computational output’ when ‘grouped’.

So four [ 4 ] elements working together are actually more than four [ 4 ] times ‘more powerful’ than one [ 1 ].

Most of the internal “matter” in their [ extraterrestrial ] ‘crafts’, usually everything – except the outermost housing – is actually ‘this [ extraterrestrial ] substrate’ and can ‘contribute to computation’ at ‘any time’ and in ‘any state’.

The ‘shape’ of these [ extraterrestrial ] “chunks” of ‘substrate’ also had a profound ‘effect’ on its [ extraterrestrial ] ‘functionality’, and often served as a “shortcut” to achieve a goal that might ‘otherwise’ be more ‘complex’.

So back to the language.

The language is actually a “functional blueprint.”

The ‘forms’ of the ‘shapes’, ‘symbols’ and ‘arrangements’ thereof is itself ‘functional’.

What makes it all especially ‘difficult to grasp’ is that every ‘element’ of each “diagram” is ‘dependent on’ and ‘related to’ every ‘other element’, which means ‘no single detail’ can be ‘created’, ‘removed’ or ‘modified’ independently.

Humans like written language because each element of the language can be understood on its own, and from this, complex expressions can be built.

However, their “language” is entirely ‘context sensitive’, which means that ‘a given symbol’ could mean as little as a ’1-bit flag’ in ‘one [ 1 ] context’, or – quite literally – contain the entire human genome or a galaxy star map in another.

The ability for a single, small symbol to contain, not just represent, tremendous amounts of data is another counter-intuitive aspect of this ‘concept’.

We quickly realized that even ‘working in groups’ of ten [ 10 ] or more on the ‘simplest of diagrams’, we found it virtually impossible to get anything done. As each new feature was added, the ‘complexity of the diagram exponentially grew’ to unmanageable proportions.

For this reason we began to develop computer-based systems to manage these details and achieved some success, although again we found that a threshold was quickly reached beyond which even the supercomputers of the day were unable to keep up.

Word was that the ‘extraterrestrials could design’ these ‘diagrams’ as ‘quickly’, and [ as ] easily as a human programmer could write a [ computer language ] Fortran program.

It’s humbling to think that even a ‘network of supercomputers’ was ‘not’ able to ‘duplicate’ what they could do in their [ extraterrestrial ] own heads.

Our entire system of language is based on the idea of assigning meaning to symbols.

Their [ extraterrestrial ] technology, however, somehow ‘merges’ the ‘symbol’ and the ‘meaning’, so a subjective audience is not needed.

You can put whatever meaning you want on the symbols, but their behavior and functionality will not change, any more than a transistor will function differently if you give it another name.

Here’s an example of how complex the process is.

Imagine I ask you to incrementally add random words to a list such that no two [ 2 ] words use any of the same letters, and you must perform this exercise entirely in your head, so you can’t rely on a computer or even a pen and paper.

If the first [ 1st ] in the list was, say, “fox”, the second [ 2nd ] item excludes all words with the letters F, O and X.

If the next word you choose is “tree”, then the third [ 3rd ] word in the list can’t have the letters F, O, X, T, R, or E in it.

As you can imagine, coming up with even a third [ 3rd ] word might start to get just a bit tricky, especially since you can’t easily visualize the excluded letters by writing down the words.

By the time you get to the fourth [ 4th ], fifth [ 5th ] and sixth [ 6th ] words, the problem has spiraled out of control.

Now imagine trying to add the billionth [ 1,000,000,000 ] word to the list ( imagine also that we’re working with an ‘infinite alphabet’ so you don’t run out of letters ) and you can imagine how difficult it is for even a computer to keep up.

Needless to say, writing this kind of thing “by hand” is orders of magnitude beyond the capabilities of the brain.
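The word-list exercise above is easy to state but, as described, quickly becomes intractable to do mentally. For anyone who wants to see the rule concretely, here is a minimal Python sketch of that exact exercise ( the function name `add_word` and the sample words are purely illustrative, not anything from the CARET material ):

```python
def add_word(words, candidate):
    """Append candidate to the list only if it shares no letters
    with any word already in the list. Returns True on success,
    False if the candidate reuses an excluded letter."""
    used = set("".join(words))       # every letter already "spent"
    if used & set(candidate):        # any overlap disqualifies the word
        return False
    words.append(candidate)
    return True

words = []
add_word(words, "fox")      # accepted: uses f, o, x
add_word(words, "tree")     # accepted: t, r, e are all new
add_word(words, "forest")   # rejected: shares f, o, r, e, t
print(words)                # only the valid words remain
```

Note that the check itself is trivial; the difficulty the letter describes comes from doing it in your head, where the growing set of excluded letters must be held in memory for every new candidate.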

My background lent itself well to this kind of work though. I’d spent years ‘writing code’ and ‘designing’ both ‘analog’ and ‘digital’ circuits, a process that at least visually resembled these diagrams in some way.

I also had a personal affinity for ‘combinatorics’, which served me well as I helped with the ‘design of software’ running on ‘supercomputers’ that could juggle the often trillions [ 1,000,000,000,000 ] of rules necessary to create a ‘valid diagram’ of any ‘reasonable complexity’.

This overlapped quite a bit with ‘compiler theory’ as well, a subject I always found fascinating, and in particular ‘compiler optimization’, a field that back then was ‘not’ half [ 50% ] of what it is today.

A running joke among the linguistics team was that Big-O notation couldn’t adequately describe the scale of the task, so we’d substitute other words for “big”.

By the time I left I remember the consensus was “Astronomical-O” finally did it justice.

Like I said, I could go on for hours about this subject, and would love to write at least an introductory book on the subject if it was not – still completely – ‘classified’, but that’s not the point of this letter so I’ll try to get back on track.

The last thing I’d like to discuss is how I got copies of this material, what else I have in my possession, and what I plan to do with it in the future.

My Collection –

I worked at PACL [ PALO ALTO CARET LABORATORY ( Palo Alto, California, USA ) ] from 1984 to 1987, by which time I was utterly burned out.

The sheer volume of details to keep in mind while working with the diagrams was enough to challenge anyone’s sanity, and I was really at the end of my rope with the military attitude towards our “need to know”. Our ability to get work done was constantly hampered by their reluctance to provide us with the necessary information, and I was tired of bureaucracy getting in the way of research and development [ R&D ].

I left somewhere in the middle of a 3-month bell curve in which about a quarter of the entire PACL [ PALO ALTO CARET LABORATORY ( Palo Alto, California, USA ) ] staff left for similar reasons.

I was also starting to disagree with the direction the leadership wanted to take as far as the subject of extra-terrestrials went.

I always felt that at least some form of disclosure would be beneficial, but as a lowly CARET ‘engineer’ I wasn’t exactly in the position to call shots.

The truth is, our management didn’t even want us discussing – even among ourselves – non-technical aspects of this subject ( such as ethical or philosophical issues ), as they felt it was enough of a breach of security to let civilians like us anywhere near this kind of thing in the first place.

So, about 3-months before I resigned ( which was about 8-months before I was really out – since you don’t just walk out of a job like that with a 2-week notice ) – I decided to start taking advantage [ remove PACL work documents, etc. ] of my ‘position’ [ a ‘situational position’ wherein his ( Isaac ) PACL ‘security inspections’ on his ( Isaac ) ‘person’ became lessened or ‘weak’ upon his ( Isaac ) ‘departures from the PACL facility’ ].

As I mentioned earlier, my DoD [ United States Department Of Defense ] experience got me into an internal management role sooner than some of my colleagues, and after about a [ one – 1 ] year of that kind of status, the outgoing [ departing ] searches [ security inspections ] each night became slightly less rigorous.

Normally, we were to empty out any containers, bags or briefcases, then remove our shirt and shoes and submit to a kind of frisking. Work was never allowed to go home with you, no matter who you were.

For me, though, the briefcase search [ security inspection ] was eventually enough [ all that the security inspection became for him ( Isaac ) ].

Even before I [ Isaac ] actually decided to do it [ remove PACL work documents, etc. ], I was sure that I would be able to sneak certain materials out with me.

I wanted to do this [ remove PACL work documents, photos, etc. ] because I knew the day would come when I would want to write something like this, and I knew I’d regret it until the day I died if I didn’t at least leave the possibility open to do so.

So I started photocopying [ PALO ALTO CARET LABORATORY ( Palo Alto, California, USA ) ] documents and reports by the dozen.

I had then [ 3-months before he ( Isaac ) resigned from the PALO ALTO CARET LABORATORY ( Palo Alto, California, USA ) ] put the papers [ documents, etc. ] under my shirt around my lower back, tucked enough into my belt to ensure they wouldn’t fall out.

I could do this [ ‘physically able to do’ but ‘not authorized to do’ ] in any one of a few ‘short windowless hallways’ on some of the ‘lower floors’, which were among the few places that did ‘not’ have an ‘armed guard watching’ my every move.

I would walk in one end [ of the ‘short windowless hallways’ ] with a stack of papers large enough that when I came out the other end [ of the ‘short windowless hallways’ ] with some of them [ documents, photos, etc. ] in my shirt – there would ‘not’ be a visible [ observational ] difference in what I was holding.

You absolutely cannot be too careful if you’re going to pull a stunt like this.

As long as I walked carefully they would ‘not’ make a crinkling noise [ paper flex rustling upon movement ].

In fact, the more papers I took, the less noise they made, since they were ‘not’ as flimsy [ a thicker stack flexes and rustles less ] that way.

I’d often take upwards of 10-pages up to 20-pages at once [ each time ].

By the time I was done, I had made out with [ unlawfully removed documents, photos, etc. away from PALO ALTO CARET LABORATORY ( Palo Alto, California, USA ) ] ‘hundreds’ [ 200+ or more ] of ‘photocopies’, as well as a few ‘originals’ and a ‘large collection’ of ‘original photographs’.

With this ‘initial letter’, I have attached high resolution scans of the following:

– One [ 1 ] page is from a [ PALO ALTO CARET LABORATORY ( Palo Alto, California, USA ) ] “inventory review” with a ‘photo’ – appears to depict one ( 1 ) of the ‘parts’ found in the Rajman sighting and ‘parts’ very similar to the Big Basin craft;

– The first [ 1st ] nine ( 9 ) pages of one ( 1 ) of our [ PALO ALTO CARET LABORATORY ( Palo Alto, California, USA ) ] ‘quarterly’ research ‘reports’;

– Scans of the ‘original photographs’ used ‘in that report’ – since the ‘photocopies obscure’ most of the ‘details’; and

– Five [ 5 ] pages from a ‘report’ on our [ PALO ALTO CARET LABORATORY ( Palo Alto, California, USA ) ] ongoing analysis of the “language” ( inappropriately titled “linguistic analysis” ) depicting the kind of diagram – just barely visible on the underside of the Big Basin craft.

This material is the most ‘relevant’ and ‘explanatory’ I could find on ‘short notice’.

Now that these are up [ on the internet ], ‘if’ I decide to release more in the future, I’ll be able to take my time and better search this rather large collection of mine that I’ve sadly never organized.

I’m not sure what I’ll be doing with the rest of the collection in the future.

I suppose I’ll wait and see how this all plays out, and then play it by ear.

There are certainly risks involved in what I’m doing, and if I were to actually be identified and caught, there could be rather serious consequences.

However, I’ve taken the proper steps to ensure a ‘reasonable level of anonymity’ and am quite secure in the fact that the information I’ve so far provided is by ‘no means unique’ among what many of the CARET participants [ had access to ].

Besides, part of me has always suspected that the [ United States of America ] government ‘relies on the occasional leak’ – like this – and actually wants them to happen, because it ‘contributes to a steady slow-paced path towards revealing’ the ‘truth’ of this ‘matter’.

Since Leaving CARET –

Like I said, I left PACL in 1987, but have kept in touch with a great many of my friends and co-workers from those days.

Most of us are retired by now, except – of course – for those of us that went-on to get ‘teaching jobs’, but a few of us ‘still hear things’ [ ‘still told of these matters’ ] through the grapevine.

As for CARET itself, I’m not sure what’s become of it.

Whether or not it’s still known by the same name, I’m quite sure it’s ‘still active’ in ‘some capacity’, although who knows where.

I heard from a number of people that PACL [ PALO ALTO CARET LABORATORY ( Palo Alto, California, USA ) ] closed up shop a few years after I left, but I’ve yet to get a clear answer on why exactly that happened.

But I’m sure ‘the kind of work we did there’ [ PALO ALTO CARET LABORATORY ( Palo Alto, California, USA ) ] is ‘still going’ strong.

I’ve heard from a lot of friends that there are multiple sites like PACL in Sunnyvale, California ( USA ) and Mountain View, California ( USA ) also disguised to look like ‘unremarkable office space’.

But this is all second-hand information so you can make of it what you will.

Around 2002, or so, I came across Coast to Coast AM [ radio station in the United States of America ] and have been hooked ever since.

I admit, I don’t take most of the [ radio program ] show’s content as anything more than entertainment, but there have been occasions when I could be sure a guest was clearly speaking from experience or a well-informed source.

For me, there’s just something very ‘surreal about hearing all this speculation’ and ‘so-called inside information’ about UFOs [ Unidentified Flying Objects ] ( and the like ) but [ my ( Issac ) ] being ‘personally able to verify’ at least ‘some of it’ as being true or false. It’s [ Coast to Coast AM radio program ( USA ) ] also a ‘nightly’ [ time period, when Coast to Coast AM radio is broadcast ] reminder of how hectic things were in those days, which helps me enjoy my retirement all the more.

Knowing I’m not part of that crazy world anymore really is something I enjoy on a daily basis, as much as I miss some of it.

Conclusion –

What I’ve shared so far is only a very small portion of what I have, and what I know.

Despite the very sheltered and insulated atmosphere within CARET, I did ultimately learn a great deal from various colleagues, and some of what I learned is truly incredible.

I’d also like to say that for what it’s worth, during my time [ 1983 – 1987 ] there [ PALO ALTO CARET LABORATORY ( Palo Alto, California, USA ) ] I never heard anything about invasions, or abductions, or many of the more frightening topics that often pop up on Coast to Coast AM [ radio program ( USA ) ].

That’s not to say that none of it is true, but in my time working alongside some of the most well-connected people in this field, it never came up.

So at the very least I can say my intent is not to scare anyone.

My view on the extra-terrestrial situation is very much a positive, albeit still highly secretive, one.

One thing I can definitely say is that if they wanted us gone, we would have been gone a very, very long time ago, and we wouldn’t even have seen it coming.

Throw out your ideas about a space war or anything silly like that. We’d be capable of fighting back against them about as much as ants could fight back against a stampede of buffalo.

But that’s okay, we are the ‘primitive race’, they [ extraterrestrials ] are the ‘advanced races’, and that’s just the way it is.

The ‘other advanced races let them live through their primitive years’ back in ‘their day’, and there is no reason to think it will be any different for us.

They [ extraterrestrials ] are not in the market for a new planet, and even if they [ extraterrestrials ] were there are way too many planets out there for them [ extraterrestrials ] to care about ours enough to take it by force.

To reiterate my take on the recent sightings, I would guess that experimentation – done in the last couple of months – on a device that, among other things, is capable of interfering with various craft’s onboard invisibility has resulted in a sudden wave of sightings.

It may ‘not’ explain ‘all’ of the recent events, but like I said, I’d bet my life that ‘is’ exactly what happened at Big Basin – at least – and it’s probably related in some way to the Chad, Rajman and Tahoe [ Lake Tahoe, California / Nevada ( USA ) ] sightings [ of the unidentified flying object ( UFO ) ].

So, despite all the recent fanfare over this, I’d say this does ‘not’ mean much.

Most importantly, they are ‘not suddenly’ “here,” they [ extraterrestrials ] have been here for a long time, but just [ have ] happened to turn ‘intentionally visible’ for brief periods ‘recently’.

Lastly, there are so many people selling books, and DVDs, and doing lectures and all that so, I would like to reiterate the fact that I am ‘not’ here to ‘sell’ anything.

The material I’m sharing is ‘free to distribute’ provided it’s all kept intact and unmodified, and this letter is included.

I tend to question the motives of anyone charging money for their information, and will assure you that I [ Issac ] will never do such a thing.

And in the future, just to cover all the bases, anyone claiming to be ‘me’ [ Issac ] who ‘is’ selling a DVD or book is most certainly ‘not going to be me’ [ Issac ].

Any future releases from me [ Issac ] will come from the e-mail address I’ve used to contact Coast to Coast AM [ USA radio station ], and will be sent to them [ Coast to Coast AM ( USA radio station ) ] only.

I’d like to make this clear as well to ensure that people can be sure that any future information comes from the same source, although I must be clear:

At this time I do not have any future plans for additional information. Time will tell how long I will maintain this policy, but do not expect anything soon.

I’d really like to let this information “settle” for a while and see how it goes.

If I find out I’m getting an IRS [ United States Department of the Treasury, Office of Internal Revenue Service ( IRS ) ] audit tomorrow, then maybe this wasn’t too smart.

Until then, I’m going to take it slow.

I hope this information has been helpful.

– Issac

– –

One of the documents ( in the form of high resolution scans of the original ) uploaded was called “PALO ALTO CARET LABORATORY Q-4 1986 RESEARCH REPORT” – here are some excerpts:

1. OVERVIEW –

This document is intended as a primer for the tentative findings of the Q4 1986 research phase ( referred to herein as “Q-4 1986” ) at the Palo Alto CARET Laboratory ( aka PACL ). In accordance with the CARET program mission statement, the goal of this research has been achieving a greater understanding of extraterrestrial technology within the context of commercial applications and civilian use. Examples of such applications, in no particular order, include transportation, medicine, construction, energy, computing and communication.

The ultimate goal of this research is to provide a core set of advanced technologies in a condition suitable for patent review.

2. EXTRACTION –

The process of converting raw artifacts of extraterrestrial origin to usable, fully-documented human technology is termed extraction. The extraction process ultimately consists of two phases:

First [ 1st ] is the establishment of a complete theoretical and operational understanding of the artifact; and,

Second [ 2nd ] is a distillation of the artifact’s underlying principles into usable, product-oriented technology.

Suggestions of specific product applications on behalf of PACL have been encouraged, but are not considered mandatory or essential.

– –

From: Isaac Subject: Re: “Drones” Date: June 27, 2007

Isaac:

“There are a few misconceptions that I have noticed so far and would like to clear them up, and will also answer your questions:

1) I realize now that I did not make this clear, but I should clarify that I am not responsible for the blacking out of the Q4-86 report. Most of the copies I was able to make came from documents that were already archived, which meant that they had already been censored for use by outside parties that needed access to some, but not all, of CARET’s information. I’m trying to share this information, not hide it, but if I did feel that if a given topic was too sensitive for some reason, I would make it clear that I had personally covered it up and probably try to give a reason why.

2) I do not understand the question about why the diagram would be “formatted for 8.5 x 11”… As I mention in my letter, the diagram is a reproduction, not the original. We had a team of technical artists painstakingly copy the diagram from its original source, which was a slightly curved panel not unlike the one seen in the Big Basin craft, although this one was apparently inside the craft, not on the outside. We copied it into a drafting program over the course of about a month.

Our software was understandably primitive by today’s standards, but it was still orders of magnitude more powerful than a pencil and paper would have been. This made a task that would have otherwise been nearly impossible relatively feasible, albeit extremely time-consuming. I can assure you, “they” did not make anything particularly convenient for us. One of the reasons we chose to reproduce that particular diagram was because out of all the diagram-artifacts we had access to, it was on the flattest surface.

Since the geometry of the forms is extremely important, curvature of the surface it’s printed on must be “corrected” if it is to be reproduced in a surface with a different contour (such as a flat page). This can be done in a number of ways, by either using a mathematical model to reverse the effect of the surface curves on the diagram’s shapes, or by methods of physical measuring that allow precise measuring of irregular surfaces. In either case, however, it adds a significant new dimension of labor to an already extremely labor-intensive task, so it’s avoided whenever possible. We really just needed one or two accurately copied diagrams to serve as convenient examples for our own work in decoding and reproducing it, so luckily this was not something we had to do often. Some experimentation was being done on ways to “scan” the diagrams as well, using an almost completely automated process that could automatically account for curved surfaces, but during my time there, very little progress was made on this front.

3) I think the confusion over the quality of the documents stems from the fact that he (critic) is under the impression they (CARET document) were typeset. They were not. First of all, I’m no guru when it comes to graphics or design, but being in close contact with numerous people from places like XPARC will give you enough background to know the lay of the land. What’s first important to note is that systems capable of desktop publishing had been in development for many years before CARET, mostly starting with the Xerox Alto (in 1973), which XPARC developed themselves.

In fact, I once remember hearing from someone related to the original Alto team that Boeing (I believe) used the Alto to lay out and print the documentation for one of their planes (or something to that effect, I heard the story years ago). The joke was apparently that there was so MUCH documentation that the plane itself could essentially be filled with the pages. Furthermore, laser printing itself had also been around for many years (albeit in an extremely expensive form), and was also developed within XPARC (more or less). Other systems, such as PERQ and Lilith, also came out around the late ’70s and while none of them turned into major commercial products, they were not uncommon among large companies and [mostly] universities and were put to very productive use.

These systems were also the inspiration for the Apple Lisa and Macintosh, which was of course perhaps the biggest factor in the consumer-level desktop publishing boom of the late ’80s and early ’90s. By 1984, there were quite a few options available for producing these kinds of documents, they were just ABSURDLY expensive, so they weren’t on every street corner. Obviously it was nowhere near as turnkey and simple as it is today, but it was a very crude approximation of the same process with similar tools. We just had far less features and everything was a hell of a lot slower. But the point I’m trying to make is that while our method of documentation was somewhat advanced for its time, and also somewhat uncommon, it was hardly unattainable by a sufficiently motivated, financed, and well-connected organization.

I had very little contact with the technical writers for the most part, but I do know that we were using this kind of technology for both page layout and printing. CARET was expected to produce a massive amount of detailed, well-formatted documentation that could be easily modified and re-used for numerous drafts and revisions, and we would not have been able to keep up using traditional page layout and typesetting techniques. The mid-1980s were a very transitional period for these fields, and I would suggest that people do not assume we were using run-of-the-mill standards.

One of the things I appreciated most about CARET was that if the technology was available, and we needed it to work better or more effectively, it was given to us with little debate. But typesetting and digital page layout are apples and oranges, so I think most of this is a moot point anyway.

The bottom line is that many people both inside and outside the engineering world frequently underestimate how long we’ve had a lot of the technology we have. 99% of the algorithms we use today were developed decades ago, they just didn’t have the same practical applications immediately available. Most of the engineers of the ’60s and ’70s would have been right at home with today’s developments and technologies. The only difference is that things have gotten smaller and faster. In the vast majority of technologies, that is the only thing that REALLY changes from one era to the next. If I told the average person that we had speech-synthesizing technology in 1936, they probably wouldn’t believe me.

I could show you a prototype of a simple drafting/design system that was operated by a light pen directly on a screen from the 1960s. You could draw a shape freehand, then immediately rotate it, modify it, duplicate it, or whatever. You could draw lines connecting different objects, then erase them by simply drawing a squiggly line over it. The computer could interpret the squiggles as a sign to erase something, all in real time. And this was half a century ago, and decades before CARET. Think about that for a moment. The point is, most of what we have today is much older than we think. The only differences are that it’s faster, cheaper, and a marketing team has given it a glossy finish and found a commercial application for it. But if you take away some of the speed, power, ubiquity and consumer appeal, you’ll find a lot of today’s technology scattered throughout much of the 20th century. I hope this is helpful.

Isaac”

– –

From: Isaac Subject: Re: “Drones” Date: June 27, 2007

Isaac:

“1) While I wasn’t a major player in the (CARET) organization, I was hardly ‘some worker.’ My middle-management position is the only reason I was able to make off with what I did. Bear in mind that even someone in my position would never get the chance to leave with even the smallest of actual artifacts, but paperwork smuggling was feasible for anyone who wasn’t subjected to the frisking.

Also, let’s not forget that paperwork only proves so much. I’ll be the first to agree that everything I’ve provided could be faked, I suppose. It is, after all, just a series of images. While the powers that be obviously don’t want this material leaking if they can help it, they’re certainly aware that scans of documents aren’t in the same league as UFOs landing on the White House lawn. I’m not the first person to leak a document or a photo, and I won’t be the last. The information I’ve shared is very unlikely to change the world, and this is the reason I’m not worried about being literally murdered if I’m identified. I’ll face consequences to be sure, but it’s not the kind of thing they kill for.

2) Of course the manual doesn’t look anything like typical government and military documents. The entire purpose of CARET was to recreate the look and feel of Silicon Valley private enterprise, populate it with private industry engineers, and let it tackle the problem of extraterrestrial technology research. Style manuals were among the numerous things we brought with us from the ‘outside world.’ I’m not sure what else can be said about this. I agree it’s uncommon for non-standard documents to come out of this kind of research, but it’s even more uncommon for people like myself (and even more so for many of my co-workers) to be brought into this kind of project in the first place. Most of us were decidedly not military men. I find that a lot more bizarre than the fact that we were able to design our reports a certain way. CARET was an exception to many of the usual rules.

3) If he (one of many critics who emailed Earthfiles and which I shared with Isaac) believes the pictures are fake, I certainly can’t do or say anything to prove otherwise. He sounds very sure of himself.

4) Most importantly, be very wary of anyone who claims to ‘know the mind’ of extraterrestrials. The comments he’s made are, to put it lightly, naive and extremely presumptuous. Firstly, he’s referring to ‘the aliens’ as if there is a single collective group of them. The universe is not split into ‘humans’ and ‘non-humans,’ any more than Earth is split up into ‘Spanish’ and ‘non-Spanish’ or something equally arbitrary. There are numerous races – and again, like our own races of humans here on earth, they do things in very different ways.

His comment that ‘the aliens don’t do this or that’ is akin to saying ‘humans don’t speak Japanese.’ Well, many humans don’t, but Japanese humans certainly do. The point is not that his statement is right or wrong, but simply that it’s phrased illogically. He then goes on to suggest that the design of the drones is wasting space, which is again, alarming in its arrogance. We had some of the brightest minds in the world spending years just to understand a single facet of their technology, while this individual claims to be able to assess basically every detail of a given design after looking at a single photo and conclude that it’s inefficient. I’m not even sure such a statement should be dignified with a response, and I’m sure you can understand why.

To be honest, whoever this person is, I wrote him off as soon as he said ‘the aliens would never design as these pictures depict.’ That’s about as presumptuous (if not ignorant) as a statement on this subject can be, at least coming from a fellow human. Unless there’s an alien engineer on the other side of this email, there’s simply no way such statements could have merit. I’m really only writing this as a courtesy to you.

At best, he’s been exposed to technology from a radically different race, and at worst, he doesn’t know what he’s talking about. This individual may have access to real information, and he might not. If he is a fellow ‘whistle blower,’ then I’m not interested in attacking him. If he’s not, and is simply making things up, then I’m even less interested. Whatever he is or isn’t is not for me to say, but judging by the way he talks about this issue I have my doubts.

It’s a big world and these are complicated issues. A sense of humility and the admission we don’t know everything is one of our greatest assets.

Isaac”

====

 

Submitted for review and commentary by,

 

Concept Activity Research Vault ( CARV ), Host
E-MAIL: ConceptActivityResearchVault@Gmail.Com
WWW: http://ConceptActivityResearchVault.WordPress.Com

 

X-CIA Files Archives 2

[ NOTE: Legacy ( 1998 – 2003 ) X-CIA FILES website reports and images ( below ). ]

WARNING & DISCLAIMER: THIS IS NOT A GOVERNMENT WEBSITE

X-CIA Files Archive 2

” Details, Usually ‘Unavailable’ Elsewhere, Are Typically ‘Available’ Here! “

ExtraTerrestrial Technologies ( ETT )

Plasma Torch Rectenna Propulsion

3D Penrose Tiling Structures

Quasi-Crystal Materials Sciences

Lenticular Tactical Aerospace Vehicles ( LTAV )

Unmanned Combat Aerial Vehicle ( UCAV ) Linear Engines

Single-Stage To Orbit ( STO ) Vehicle Propulsion

Space-Time Continuum Manipulations

INTEL ( DOCILE ) Digital IC Orbit Communication Technologies

ExtraTerrestrial Biological Entities ( EBE )

Rare Unidentified Flying Objects ( UFO )

[ PHOTO ( above ): LOCKHEED SR-75 Penetrator legacy cut-away shows 4 PRATT & WHITNEY engines ( click on image to enlarge ) ]

[ PHOTO ( above ): LOCKHEED SR-75 Penetrator with ROCKETDYNE AeroSpike engines ( click on image to enlarge ) ]

[ PHOTO ( above ): ROCKETDYNE XRS-2200 AeroSpike thrust-vector engine testbed ( click on image to enlarge ) ]

[ PHOTO ( above ): ROCKETDYNE XRS-2200 AeroSpike diagram of thrust-vectoring engine technology ( click on image to enlarge ) ]

[ PHOTO ( above ): F-117-E Stealth Reconnaissance Vehicle with dual ( 2 ) ROCKETDYNE XRS-2200 AeroSpike thrust vector engines installed ( click on image to enlarge ) ]

[ PHOTO ( above ): LOCKHEED LASRE SR-74 Scramp with hypersonic nuclear electric ion engine ( click on image to enlarge ) ]

Linear AeroSpike Engines for Unmanned Combat Aerial Vehicles ( UCAV )

ROCKETDYNE linear AeroSpike XRS-2200 ( RS-2200 ) engines utilize a design first developed in the early 1970s incorporating Apollo space mission era hardware from J-2S engines. Although this AeroSpike engine design began almost 30-years ago, it is of strategic importance today in the F-117-E Stealth reconnaissance aircraft, and it will be for future aerospace vehicles now under development.

21st Century propulsion technology was derived from a combination of 1960s era hardware developed from several decades of engine designs plus 1990s era design / analysis tools and fiscal realities, which together ushered in an entirely new era of commercial and military space flight where ‘old technology’ was found to be a primary key to developing even newer technological advancements.

The AeroSpike team located vendors who, more than 30-years ago, manufactured the original J-2S legacy engine hardware that AeroSpike based its turbo-machinery on.

Vendors still in existence were contracted to provide the Program with newly built J-2S hardware.

In cases where vendors had gone out of business, new generation vendors were identified to produce new hardware from 30-year old legacy designs.

Panels that make up the unique AeroSpike nozzle presented a huge design challenge.

AeroSpike nozzles have a form like a large curved ramp – unlike the traditional bell-shaped nozzles characteristic of most rocket engines.

AeroSpike nozzle thermal and structural loads required development of new manufacturing processes and toolings to fabricate and assemble AeroSpike nozzle hardware.

The AeroSpike team found a way to accommodate huge thermal expansions and induced mechanical forces in the mechanical and brazed joints of the assemblies; appropriate materials and attachment techniques were identified, and effective manufacturing processes were developed.

In early 1997, a small half-span lifting-body vehicle model [ SR-74 Scramp ( see further below ) ] equipped with an AeroSpike 8 thrust-cell nozzle engine – called the LASRE experiment – was piggy-back mounted onto a LOCKHEED SR-71 BlackBird high-altitude reconnaissance aircraft, which was then tasked to operate like a ‘flying wind test tunnel’ to determine how a Single-Stage-To-Orbit ( STO ) Reusable Launch Vehicle ( RLV ) onboard AeroSpike engine plume would affect the aerodynamics of the lifting-body vehicle shape at specific altitudes and speeds, initially reaching approximately 750-mph.

The interaction of the aerodynamic flow with the engine plume could create drag; design refinements minimized that interaction.

The lifting-body model, the eight [ 8 ] nozzle AeroSpike engine and the canoe were collectively called the “pod.” The entire pod was 41-feet in length and weighed 14,300 pounds. The experimental pod, mounted onto an Air Force LOCKHEED SR-71 BlackBird stealth reconnaissance aircraft loaner, was completed in November 1998.

Braze of the first flight ramp was successfully completed, as were the fabrication and assembly of the parts for the first thrusters.

With completion of those milestones the AeroSpike engine proceeded through fabrication, test and delivery, enabling support for the planned first flight in 1999.

Now that the AeroSpike XRS-2200 linear thrust-vectoring engine’s eventual ‘new placement’ was set for the sub-orbital technology air/space vehicle – referred to as the X-33 VentureStar – a Rocketdyne team anticipated delivery of their first XRS-2200 AeroSpike flight engine by September of 1999.

Also, a “combined industry and government team” at LOCKHEED-MARTIN Skunk Works ( Palmdale, California ) was developing the X-33 for its AeroSpike XRS-2200 engine flight out of Edwards Air Force Base ( California ) scheduled for December of 1999.

The Linear Aerospike XRS-2200 ( RS-2200 ) engine was developed by the ROCKETDYNE PROPULSION AND POWER UNIT of the BOEING COMPANY, which indicated it completed the engine in early 2000 – although the F-117A Stealth fighter was already secretly flying ‘long before’ that official public information release.

The difference between the linear AeroSpike engine and conventional rocket engines is the shape of the nozzle – unlike conventional rocket engines, which use a bell-shaped nozzle to constrict expanding gases, the Aerospike nozzle is V-shaped and called a “ramp”.

Electro-Mechanical Actuators ( EMA ) are used in propellant valving for these engines, and their performance is being evaluated. EMA is seen as a technology of choice in new rocket engines that would be developed under the Space Launch Initiative ( SLI ).

The XRS-2200 gas generator operated successfully in the flow-rate range of proposed X-33 operating conditions. The gas generator essentially was a J2 gas generator modified for the higher chamber pressure and flow-rate required for the XRS-2200.

The gas generator must be able to operate in conditions significantly higher than normal J2 operating conditions.

A review of the data showed the gas generator operated in these conditions and also that the combustor shell wall temperatures were within acceptable tolerances. Post-test inspections also found the hardware to be in good operating condition, which showed marked improvement over past hardware weakening. Engineers at Marshall Space Flight Center ( MSFC ) were able to demonstrate the gas generator could be started with a softer ramp, to minimize overpressure of the combustor shell, by accurately sequencing valve timings and ramps on the XRS-2200 AeroSpike engine.

Successful component tests followed a series of AeroSpike multi-cell engine tests at Marshall Space Flight Center that successfully demonstrated hydrogen-oxygen combustion at full power, emergency power, and low throttle conditions.

The pressure fed thrusters and AeroSpike nozzles were developed at the Rocketdyne Division of Boeing under a technology agreement with NASA and Lockheed-Martin who was set to build the VentureStar X-33 transport aerospace vehicle.

The XRS-2200 AeroSpike engine shoots hot gases from multiple linearly placed chamber nozzles along the outside of the ramp surface. This unusual design allows the engine to be more efficient and effective than today’s rocket engines by ‘modulating the thrust’ of various positioned sets of these nozzles acting in concert with vectoring – shaping the direction of engine propulsion / thrust.
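The thrust-modulation scheme just described can be sketched numerically. The following is a minimal illustrative sketch only – the function name, cell counts, per-cell thrust, throttle settings and moment arm below are all assumptions, not ROCKETDYNE data – showing how unequal throttling of the upper and lower banks of thrust cells produces a steering moment without moving any nozzle:

```python
# Illustrative sketch of differential throttling on a linear aerospike.
# All numbers are assumed for illustration; none are XRS-2200 data.

def net_thrust_and_moment(per_cell_thrust_n, upper_throttles, lower_throttles, moment_arm_m):
    """Sum the thrust of the upper and lower thrust-cell banks and
    compute the pitching moment produced by throttling them unequally."""
    upper = sum(per_cell_thrust_n * t for t in upper_throttles)
    lower = sum(per_cell_thrust_n * t for t in lower_throttles)
    # Unequal bank thrust acting at +/- moment_arm_m from the centerline
    # steers the vehicle (thrust vectoring) with no gimbal hardware.
    return upper + lower, (upper - lower) * moment_arm_m

# 10 cells per bank (20 per engine, as in the text): upper bank at 100%
# throttle, lower bank pulled back to 80%.
thrust_n, pitch_moment_nm = net_thrust_and_moment(
    per_cell_thrust_n=100_000.0,
    upper_throttles=[1.0] * 10,
    lower_throttles=[0.8] * 10,
    moment_arm_m=1.5,
)
```

With these assumed numbers, pulling one bank back by 20% still leaves 90% of total thrust while generating a substantial pitching moment – which is the sense in which thrust is ‘modulated … in concert with vectoring’.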

Hot test firings were performed on the powerpack at the John C. Stennis Space Center, which included the turbo-machinery and gas generator, and ran a program duration of 45-seconds with a start to the 80% power level, a transition to mainstage operation at 100% power, and then a throttle down to 57% power.

Test data indicated normal shutdown with no anomalies for the ROCKETDYNE AeroSpike XRS-2200 linear engine designed for use onboard the VentureStar X-33 Reusable Launch Vehicle ( RLV ) prior to delivery to the LOCKHEED-MARTIN VentureStar X-33 assembly facility ( Palmdale, California ), from which the X-33 was to be flown out of Edwards Air Force Base ( California ) into outer space followed by a return touchdown at one ( 1 ) of two ( 2 ) landing sites ( i.e. Utah or Montana ).

Some VentureStar X-33 and XRS-2200 ROCKETDYNE engine project participants, were:

– Gene Austin, Program Manager for NASA VentureStar X-33 at Marshall Space Flight Center;
– Cleon Lacefield, Vice-President LOCKHEED-MARTIN Space Systems ( Palmdale, California ) VentureStar X-33;
– Don Chenevert, Program Manager, NASA X-33, Aerospike Engine Testing at Stennis Space Center, MS;
– Mike McKeon, Program Manager X-33 Aerospike Engine, ROCKETDYNE Propulsion and Power Unit, BOEING ( Canoga Park, California ); and,
– Steve Bouley, Division Director, Propulsion Development, ROCKETDYNE Propulsion & Power Unit, BOEING.

Instead of hydraulics, future propulsion systems may use EMAs to control major propellant valves so gaining performance data in ‘real world testing’ has significant value.

There are six ( 6 ) EMAs – on each AeroSpike test engine – used to deliver propellants to the thruster banks and gas generators. Two ( 2 ) engines will use forty ( 40 ) thrusters – 20 per XRS-2200 AeroSpike engine – to achieve aircraft velocities exceeding Mach 13 +.

A total of seven ( 7 ) variations of the Advanced Linear AeroSpike Engines – built by the ROCKETDYNE DIVISION of BOEING – were to power the X-33 VentureStar RLV to have been built by LOCKHEED-MARTIN.

There were three ( 3 ) additional powerpack assemblies and four ( 4 ) full-up AeroSpike XRS-2200 linear engines – including two ( 2 ) flight units – that existed during the remainder of the development program.

ROCKETDYNE developed the XRS-2200 Aerospike linear engine at its Canoga Park, California facility for the later cancelled ( 2001 ) VentureStar X-33 Single-Stage To Orbit ( STO ) Reusable Launch Vehicle ( RLV ) space transport program. A joint BOEING and NASA team at Stennis Space Center did the final XRS-2200 AeroSpike engine assembly.

The RS-2200 Linear Aerospike Engine is being developed for use on the LOCKHEED-MARTIN Skunk Works Reusable Launch Vehicle ( RLV ).

The Aerospike linear engine allows the smallest, lowest-cost RLV ( Reusable Launch Vehicle ) to be developed because the engine fills the base ( reducing base drag ) and is integral to the vehicle – reducing installed weight when compared to a bell-shaped conventional rocket engine.

The Aerospike is somewhat the same as a bell-shaped rocket engine, except that its nozzle is open to the atmosphere. The open plume compensates for decreasing atmospheric pressure as the vehicle ascends – keeping engine performance very high along the entire trajectory.
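That altitude-compensation behavior follows from the standard rocket thrust equation, F = ṁ·vₑ + ( pₑ − pₐ )·Aₑ: a fixed bell nozzle freezes the exit pressure pₑ at one design value, while an open aerospike plume lets pₑ track the ambient pressure pₐ. A minimal sketch, with all engine numbers assumed purely for illustration ( not XRS-2200 data ):

```python
# Minimal sketch of the standard rocket thrust equation,
# F = mdot*ve + (pe - pa)*Ae, showing why an altitude-compensating
# (aerospike) nozzle keeps performance high along the whole trajectory.
# All engine numbers below are assumed for illustration only.

def thrust(mdot_kg_s, ve_m_s, pe_pa, pa_pa, ae_m2):
    """Thrust (N) = momentum term + pressure-mismatch term at the exit."""
    return mdot_kg_s * ve_m_s + (pe_pa - pa_pa) * ae_m2

MDOT, VE, AE = 300.0, 3500.0, 2.0     # assumed mass flow, exit velocity, exit area
SEA_LEVEL, VACUUM = 101_325.0, 0.0    # ambient pressures (Pa)

# Fixed bell nozzle: exit pressure frozen at its design value (here
# expanded for altitude, pe = 40 kPa), so it loses thrust at sea level.
bell_sl = thrust(MDOT, VE, 40_000.0, SEA_LEVEL, AE)
bell_vac = thrust(MDOT, VE, 40_000.0, VACUUM, AE)

# Aerospike: the open plume adjusts so pe tracks pa, zeroing the
# pressure-mismatch loss at every altitude.
spike_sl = thrust(MDOT, VE, SEA_LEVEL, SEA_LEVEL, AE)
```

With these assumed numbers the fixed bell loses roughly 12% of its thrust at sea level to the pressure-mismatch term – a loss the open-plume nozzle avoids at every altitude.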

This altitude compensating feature allows a simple low-risk gas generator cycle to be used. Over $500,000,000 has been invested to-date in AeroSpike engines, and full size linear engines have accumulated seventy-three [ 73 ] tests and over 4,000 seconds of operation.

Following the series of tests, XRS-2200 AeroSpike engines were removed from the test stand facility and put into storage at the Stennis Space Center – awaiting NASA instructions on engine final dispositions.

The precursor to the AURORA Transport-Lift Vehicle X-43 placed three ( 3 ) such X-43A aerospace vehicles inside the Dryden Flight Research facility at Edwards Air Force Base, California where a 12-foot-long under-wing test vehicle existed for the NASA “Hyper-X” multi-year hypersonic research program [ AURORA ] to demonstrate “airframe integrated and air breathing ( AeroSpike ) engine technologies” that promise to increase payload capacity for future vehicles by consuming ambient oxygen at altitudes higher than previously possible.

This will remove the need for carrying oxygen tanks onboard to promote combustion, as traditional rockets must do now.

Two flights are planned at Mach 7 ( approximately 5,000 mph ) and one ( 1 ) flight at Mach 10 ( almost 7,200 mph ) to a top speed of Mach 13 +.

By comparison, the world’s fastest “air-breathing plane” – to date – was the LOCKHEED SR-71 Blackbird, which could fly at an ‘unclassified airspeed’ of Mach 3 + to an estimated top speed of Mach 7.
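The Mach figures quoted above can be sanity-checked by multiplying by the local speed of sound, which varies with altitude and temperature; the 720 mph reference value below is an assumption chosen to match the article's numbers, not flight data:

```python
# Mach number is airspeed divided by the local speed of sound, so
# mph = Mach x speed-of-sound. 720 mph is an assumed reference value
# (roughly a high-altitude figure), picked to match the quotes above.

SPEED_OF_SOUND_MPH = 720.0  # assumed; varies with altitude and temperature

def mach_to_mph(mach, speed_of_sound=SPEED_OF_SOUND_MPH):
    return mach * speed_of_sound

for m in (3, 7, 10, 13):
    print(f"Mach {m:>2} = {mach_to_mph(m):>6,.0f} mph")
# Mach 7  -> 5,040 mph (the article's "approximately 5,000 mph")
# Mach 10 -> 7,200 mph (the article's "almost 7,200 mph")
```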

Future generations of electro-mechanical actuators ( EMA ) will be even more compact – than those currently in operation – which will pave the way for linear AeroSpike acceleration thrust-vectoring engines to be deployed onboard ‘newly designed’ Unmanned Combat Air Vehicles ( UCAV ).

Some X-33 and ROCKETDYNE XRS-2200 AeroSpike engine project participants, were:

– Gene Austin, NASA X-33 Program Manager, Marshall Space Flight Center;
– Cleon Lacefield, Lockheed-Martin Space Systems Company Vice President for X-33, Palmdale, CA;
– Don Chenevert, NASA X-33 Program Manager, AeroSpike Engine Testing, Stennis Space Center, MS;
– Mike McKeon, X-33 AeroSpike Engine Program Manager, Rocketdyne Propulsion and Power, The Boeing Company, Canoga Park, CA; and,
– Steve Bouley, Division Director, Propulsion Development, Rocketdyne Propulsion & Power Unit, The Boeing Company.

August 8, 2001 – The NASA Second Generation Reusable Launch Vehicle Program – also known as the Space Launch Initiative ( SLI ) – is making advances in propulsion technology with this third and final successful engine hot-fire designed to test electro-mechanical actuators. Information learned from this hot-fire test series about new electro-mechanical actuator technology – which controls the flow of propellants in rocket engines – could provide key advancements for the propulsion systems of future spacecraft.

The test of twin ( 2 ) Linear AeroSpike XRS-2200 engines, originally built for the X-33 program, was performed Monday, August 6, 2001 at the NASA Stennis Space Center, Mississippi where the engines were fired for the planned 90-seconds and reached a planned maximum power of 85%. The test was originally slated to attain full power during 100-seconds of testing; prior to the test, engineers determined the necessary results could be achieved at reduced duration and power, so both were reduced. Two ( 2 ) shorter hot-fires of the AeroSpike engines were performed the previous month [ July 2001 ] in preparation for the final test firing on August 6, 2001.

The Second Generation Reusable Launch Vehicle ( RLV ) Program, led by the NASA Marshall Space Flight Center in Huntsville, Alabama is a technology development program designed to increase safety and reliability while reducing costs for space travel.

“Because every engine proposed by industry for a second generation vehicle has electro-mechanical actuators, we took advantage of these AeroSpike engines already on the test stand to explore this relatively new technology now – saving us valuable time later,” said Garry Lyles, Propulsion Projects Office manager of the Second Generation Reusable Launch Vehicle Program at the Marshall Center. “This data is critical toward developing the confidence required to support the use of these actuators on future launch vehicles.”

Electro-mechanical actuators electronically regulate the amount of propellant (fuel and oxidizer) flow in the engine. The new technology is a potential alternative and improvement to the older pneumatic and hydraulic fluid systems currently used by the aerospace industry to drive and control critical rocket engine valves.
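As a rough illustration of what ‘electronically regulating’ a valve means, the loop below drives a valve toward a commanded position with a rate-limited proportional controller. The gain, rate limit, and the 85% command are hypothetical values chosen for illustration ( the 85% merely echoes the test's power level ) – this is not BOEING or NASA flight software:

```python
# Hypothetical electro-mechanical actuator sketch: an electric motor drives
# a propellant valve toward a commanded position, replacing the hydraulic /
# pneumatic servo. Gain and rate limit are invented for illustration.

def step_actuator(position, command, kp=0.4, max_rate=0.05):
    """One control tick: move toward `command`, slew-rate-limited like a motor."""
    error = command - position
    move = max(-max_rate, min(max_rate, kp * error))  # clamp per-tick travel
    return position + move

pos = 0.0  # valve fully closed (0.0 = closed, 1.0 = fully open)
for tick in range(200):
    pos = step_actuator(pos, command=0.85)  # command the valve to 85% open
print(f"valve position after 200 ticks: {pos:.3f}")  # converges to ~0.850
```

The rate limit models the motor's finite slew speed; the proportional term models the controller easing off as the valve approaches its commanded position.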

“This series of engine firings tested the actuator control system in what we call a ‘real condition of use’ environment,” said Dr. Donald Chenevert, electro-mechanical actuator project manager at the Stennis Center. “Firing allows us to see how the integrated system handles the extreme cold of cryogenic propellants, the stress loads of the propellants pushing through the valves, and the dynamic response to commanded flow rate changes. Additionally, we have many other unique conditions such as shock and vibration loads not found in a lab, so we capture more realistic data about the true performance of the actuators.” Engineers are performing engine post-test inspections, and early indications are that all test objectives have been met, Chenevert said.

The final data is to be fed directly into the engine systems being considered for a second-generation reusable launch vehicle, Lyles said. “Propulsion is one of the highest and most critical technology areas that we are exploring,” said Dennis Smith, manager of the Second Generation Reusable Launch Vehicle Program Office at the Marshall Center. “Our goal also is to find, improve or develop technologies such as airframes, avionics, health management systems and ground operations – all to make getting people and payloads into space safer and cheaper.”

The Rocketdyne Propulsion and Power Unit of The Boeing Company in Canoga Park, Calif., developed the AeroSpike engine and supported the engine tests at Stennis Space Center.

– –

[ photo ( above ): none – no photo of the large UFO described below exists to-date ]

USA, California, Hesperia – 1997 – Local area residents of southern California’s Victor Valley formed a collective, focusing new attention toward the sky after a rash of ‘satellite failures’ and unidentified flying object ( UFO ) sightings – including one ( 1 ) very large triangle craft that hovered at night over a popular California interstate, blocking out starlight and stopping 2-way traffic for 10-minutes – and, further south, a few extraterrestrial biological entity ( EBE ) sightings.

Local newspapers only reported that local area residents, curious about too many unexplained sightings, were holding a Town Hall Meeting open to the public ( Main Street in Hesperia, California ) to discuss and compare what they were encountering.

The town hall meeting, held in a small retail center with a fast food store, unfortunately did not discuss UFO sightings because a woman – operating an overhead transparency slide projector – placed images of ‘foreign’ ( Mexico ) area Chupacabra sightings up for discussion. Frustrated by the obviously lengthy Chupacabra distraction, which took no local area UFO questions, most residents in attendance abandoned the town hall meeting to stretch their legs outside, where some began talking amongst themselves about the ‘UFO and alien local encounters’ they thought the meeting was supposed to allow for open discussion.

Noticing town hall meeting attendees pouring outside, freelance reporter Paul Collin interviewed the disenfranchised residents who left their off-topic town hall meeting inside. Eyewitnesses came with family members, some providing additional eyewitness accounts. Providing one-on-one interviews for only first-hand reports describing details, residents were also allowed to personally sketch drawings of personal UFO and alien entity encounters.

A good investigative journalist may play unknowledgeable while subtly and quite effectively assessing normal human frailties versus purposeful deceit in getting to the bottom of the truth. Easy to spot are armchair storytellers ( with plenty of time on their hands, who invariably stray from the topic to talk about what they did or do for a living ), narcissists ( rambling on about themselves while exhibiting rather odd personal quirks ), and weirdos and opportunists ( some wearing partial Star Trek or Wonder Woman costumes, alien face masks, or spring-wired tinfoil antenna balls sprouting from headbands, or constantly checking their compact mirrors to see if their make-up is still on correctly ); one then moves on to interview others whose purpose stems from serious concerns as resident members of the community.

Even then, trying to determine fact from fictionalized accountings is not an easy task. You look deep into these people’s faces as they convey their stories. “Did they really see what they’re claiming?” Look at their faces, closer – any micromomentary facial expressions? Also look carefully at their eyes and the direction they quickly snap ‘just before beginning to answer your question’. Look carefully at their reactions after throwing their own statement back at them, but with a purposeful small inaccuracy, to see whether they correct it, become exasperated by your having just twisted what they conveyed, or continue as though that is what they said – but actually didn’t. Can they provide details as to what they were doing ‘just before the time of their encounter’? Do they appear easily disturbed emotionally, or do they offer light-hearted asides while discussing their more serious concerns on-topic?

Most were rather ‘original’, several did not match what was mostly being reported, and some interviewees were very apologetic for not having more than just a little to report. The culmination of many reports served to quickly narrow the scope of ‘believable encounters’ from those ‘otherwise’.

Analysis of all boiled down to the following six ( 6 ) essential facts:

1. High-volume ( PUBLIC ) sightings;

2. Short-term duration ( 30-DAY ) reportings;

3. Small region ( HIGH DESERT ) locations;

4. Near ground Low Earth Orbit ( LEO ) altitudes;

5. Limited design ( UFO ) triangles; and,

6. Incident ( MAJOR ) highways.

Of all the reports, only four ( 4 ) really stood out:

A. There was the Hesperia, California family in their minivan – homebound east on Main Street ( Hesperia, California ) with a clear sunset behind them, having just left a soccer game – when all occupants began to comment about what appeared outside their windshield in the low horizon distance: a slimline triangle-shaped UFO that just lingered ( for about 5-minutes ) but then suddenly ( in seconds ) snapped its location due south and shot upward, where all of a sudden – in ‘mid-sky’ – it just blinked-out before even reaching the upper darkening sky. The triangle UFO exhibited ‘no contrails’, ‘no sound barrier boom’, nothing. Just a brief low earth hover, a quick snap south, and then up out of sight in the blink of an eye. While the kids were all excited, the parents tried calming them down – along with their own unsettled nerves – explaining it all away as only being some new Air Force jet. Deep inside, the parents knew it was ‘not any aircraft’, but only one ( 1 ) of the other unexplained sightings plaguing yet other residents over the past month;

B. All alone, a middle-aged man traveling west on Main Street ( Hesperia, California ), homebound for Phelan, California, spotted in the southwest sky over the Wrightwood mountains a large triangle craft slowly moving upward. Stopping at the California State Highway 395 traffic light, he looked back up out his windshield and saw nothing there anymore, but it gave him something to tell his wife when he arrived home. The wife, rolling her eyes, put dinner on the table – but interestingly was also present by his side at the town hall meeting as well. Residents wanted to know what was going on in their local community, especially after local UFO sightings appeared to begin registering in their local paper;

C. Further southwest and beyond the Wrightwood mountains – in Azusa, California – a grandmother and her live-in daughter, a nurse, both witnessed – on two ( 2 ) separate occasions while driving home slowly down their semi-rural neighborhood street at night – two ( 2 ) glowing red eyes in the head of what appeared to them to be a small 2-legged ape-like creature hunched down by the side of their road; although well ahead of their vehicle, the creature suddenly darted across the street at what they both claimed was a ‘frightening blur’ of a pace. The women also spotted what they believed was the same 2-legged ape-like creature with red eyes three ( 3 ) additional times, but inside the furthest corner of their backyard, where it seemed to be glaring at them both through the rear kitchen window. Scared to death, both residents – by the time they thought to call police – had witnessed the creature pop up, rather unusually, bounding over and outside their backyard fence. I had to ask if they remained in the town hall meeting for the Chupacabra discussion, and they glanced at each other and let me know that what they saw was ‘not’ a Chupacabra. I asked, “Could it have been a baby Chupacabra?” They looked at each other and then back at me, shaking their heads in the negative. Their ‘thing’ was ‘not hairy’, did ‘not have head horn spikes’, and was ‘not a color shade of grey, blue, or eggshell’. It was ‘black’, ‘short’, and when it moved – it moved ‘extremely fast’ with an ‘odd blur’ you couldn’t focus-in on. I thought to myself, “Probably darn hard to target fire onto;” and,

D. What brought the freelance reporter to the meeting in the first place was his own personal encounter in the same general area of the High Desert of southern California, where 1-week earlier at night – while 20-minutes northeast of Victorville, California in the middle of the desert on California Interstate 15 ( I-15 ) on his way to Las Vegas, Nevada – he noticed traffic on ‘both sides of the highway’ pulling over and stopping. He figured a serious accident had occurred and pulled over to exit his vehicle and look out into the desert along the highway, but didn’t see any vehicles there. He walked back a couple of cars and noticed a group of people talking together and asked where the accident was. He was told to look up just a little off to the east of the interstate highway to see what was stopping traffic, and there ‘it’ was – an incredibly huge black triangle-shaped object just hovering without any lights on. The oddest thing about it was that it was so huge that a whole section of the night sky had no starlight: anyone could easily see starlight all around the flying object, but no starlight directly above the behemoth. I asked the group what the thing was doing and what it had been doing. They said they didn’t know what it was doing now, but that it had been exhibiting a low hum, which stopped, and it was just continuing to linger where it had been for what they estimated had been 15-minutes. I called the California Highway Patrol office and they said they were already responding to it. I heard no emergency sirens and saw no red lights. I waited another 15-minutes and nothing happened. It just lingered a few hundred feet above ground off in the desert. Not being too much braver, I decided to get back into my car, turn around, and go back home.

The following week, I located one ( 1 ) particular Blockbuster video store on Bear Valley Road in southwest Victorville, California. That particular store carried an unusually large selection of UFO documentary videos placed in a special section. I watched over 100 of those videos to determine if anyone else might have seen any huge triangle UFOs. At the time ( 1997 ) there were unfortunately ‘no flying triangle videos’ I could lay my hands on.

Apparently my frequent selections attracted the attention of the store owner, John Pflughoft of MPM INVEST, who eventually approached me and politely asked why I was interested in watching so many UFO videos. I think he knew something had startled me into that habit so, I conveyed what I had seen the previous week.

I also shared my late night experiences during 1972 while assigned to the Intelligence Section Station at George Air Force Base and later Edwards Air Force Base in the High Desert. I told him about strange red, orange, and yellow ‘firelight’ coming out the tops of some of the smaller mountains scattered between George AFB and Edwards AFB out in the middle of the desert. The video proprietor asked if I knew what the ‘firelights’ were. I told him I figured it was just rocket engine testing going on inside some of those small mountains.

He asked if I had ever seen any UFOs before last week. All I had to convey was an experience in 1976 while camping at night with a couple of military buddies of mine up in the Iron Mountain range between Randsburg, California and Mojave, California; while we ‘saw nothing’, all three ( 3 ) of us ‘heard’ a very unusual ‘electronic whirring sound’ that seemed to be traveling up and down both sides of the foothills a few hundred feet from where we were trying to sleep. I told him we walked in the direction of where we last heard the whirring sound coming from but saw nothing. Then we returned to our camp, where 30-minutes later we all heard it start back up again so, we left in the middle of the night and drove 90-miles to get home. He smiled and said, “Well, I guess that until last week you’ve been pretty lucky to have remained out of the UFO experience.”

He then asked if an upcoming town hall meeting in Hesperia, California – where residents were going to discuss their own personal UFO and extraterrestrial encounters during the recent month – might interest me. I knew nothing of any other sightings so, he suggested I attend and asked if I would report back to him what I learned. I agreed, attended the meeting, but when it began being abandoned, dug-out a yellow legal pad of paper and began interviewing attendees upon exit. A final report was prepared and, along with resident sketches, placed in a manila envelope, sealed-up, and dropped-off at the Blockbuster store for his later review.

– –

[ PHOTO ( above ): Circa 12OCT62 – Ames Langley Research Center lenticular vehicle aero-space body designs ( click on image to enlarge ) ]

As far back as October 12, 1962 the Ames, Langley and Dryden Flight Research Centers began feasibility studies and designs for developing a lenticular-design space re-entry air vehicle with speeds capable of reaching Mach 25 + to Mach 50 +.

[ photo ( above ) TR-3B Astra – Flying Triangle ( click to enlarge ) ]

In 1995, at Nellis Air Force Base Test Range S-4 ( near Papoose Lake, Nevada ) the TR3-B ( a lenticular-shaped aerial vehicle ) was seen and reported to be between 300-feet and 500-feet in diameter.

Reportedly, the TR3B flies at speeds of Mach 15 + and reflects a bright blue grey color believed to be biological electro-chromatic 3-D Penrose tiling polymer material providing highly advanced stealth qualities.

TR3B is also believed to have carried the INTEL company Direct Orbital Communication & Intelligence Link Electronics ( DOCILE ) computer processor unit ( CPU ) system.

TR3B is believed to have derived partial funding from Strategic Defense Initiative ( SDI – Star Wars ) links with the super secret AURORA Program Office global security defense operations mission.

TR3-B is believed to use a quasi-crystalline molecular-property energy containment storage core driving a plasma-fluidic propulsion system employing combinatoric development of its Magnetic Field Disruptor ( MFD ) quantum-flux transduction field generator technology.

TR3B reportedly emits cyclotron radiation, performs pulse detonation acceleration, and carries EPR quantum receivers.

TR3B craft reportedly resembles a ‘very large triangle’ ( shaped ) air vehicle.

TR3B (aka) TR3-B (aka) TIER III B craft in no way resembled the TR-3/A MANTA air vehicle.

The flight-testing of all experimental and first-model military aircraft occurred along an ancient dry lake now called Rogers Dry Lake, located on the western edge of southern California’s Mojave desert – south of Highway 58 between the two ( 2 ) towns of Mojave, California and Boron, California ( site of the World’s largest open-pit borax mine ) – along one of the first immigrant trails through California.

The first permanent settlers of this desert region were the Corum family, who located near this large dry lake area ( in 1910 ). Local residents later tried to get the local U.S. Post Office to name the settlement “Corum, California,” however another city with a similar name ( “Coram, California” ) already existed so, the name “Corum” was reverse-spelled as “Muroc” – which is how this Mojave desert area saw the U.S. Army Air Corps later name “Muroc Field” and the subsequent naming of the NASA Muroc Flight Test Unit ( MFTU ) in this California area.

This dry lake was extremely ideal as what would become the major site of aviation flight-test history because, at about 2,300-feet above sea level, Rogers Dry Lake fills an area of about 44-square miles ( nearly twice as large as New York’s Manhattan Island ), making it one of the largest and best natural ( flat and hard-surfaced ) landing sites on Earth. The arid desert weather also promotes excellent flying conditions on almost every day of the year ( about 320-days out of the year ).

Rogers Dry Lake is the sediment filled remnant of an ancient lake formed eons ago. Several inches of water can accumulate on the lakebed when it rains, and the water in combination with the desert winds creates a natural smoothing and leveling action across the surface. When the water evaporates in the desert sun, a smooth and level surface appears across the lakebed, one far superior to that made by humans.

[ photo ( above ) DOUGLAS AIRCRAFT Black Horse Project Manta ( click to enlarge ) ]

The AURORA Program Office consisted of lenticular-shaped and wingless aerospace vehicle Projects that industry sleuths speculate secretly held a billion-dollar high-speed high-altitude surveillance aerospace vehicle that leaves a ‘contrail’ behind it resembling ‘doughnut clouds on a string’.

According to some reports, AURORA Program aerospace vehicles are capable of high-speed maneuverability allowing abrupt course change corrections within their own flight path.

The information ( below ) consists of excerpts from two ( 2 ) individuals, Robert “Bob” Lazar and Edgar Fouche, during different time periods at different locations. These relevant excerpts should serve to familiarize readers and provide interesting relationships between similarities of ExtraTerrestrial Technologies ( ETT ), reverse-engineering ( backward engineering ) of extraterrestrial spacecraft seized by the U.S. government military, and current-day advanced technologies controlled by the U.S. Department of Defense ( DOD ) Defense Advanced Research Projects Agency ( DARPA ) programs and projects worldwide.

Obvious template similarities seem to have been successfully exploited in order to produce fully operational high-performance defense and observation flightcraft for exclusive U.S. government use, examples of which may be viewed in the section “NEWS ALERTS!” on this website.

[ photo circa: 1995 ( above ) Area 51, Groom Lake Nevada ( click to enlarge ) ]

While Bob Lazar ( below ) provides interviews mentioning eyewitness accounts coupled with basically seamless theories covering time and space folding with alien spacecraft technology, inter alia ETT [ ExtraTerrestrial Technologies ] – given his previous work at the Nellis Air Force Base, Nevada Test Range Site S-4 Area 51 ( near Groom Lake, Nevada ) – Edgar Fouche provides another arena of detailed information coinciding in some areas with what Bob Lazar saw in his own experiences near ExtraTerrestrial technology ( ETT ). Edgar Fouche depicts how ETT was converted into operational-use flying craft for United States government arenas.

Skeptics may no longer need to speculate on what current aerospace lenticular craft the U.S. has developed and just what is planned for the not-too-distant future, where most of these highly classified Programs and Projects will remain cloaked for some time to come.

[ NOTE: For a look at current lenticular crafts, developed by the U.S., search this website for photos and details on UAV, UCAV, MCAV, MAV and High-Energy Weapons ( HEW ) and Directed Energy Weapon ( DEW ) research and development. ]

Culminations of technology data and relevant associated theories have never before been produced in one ( 1 ) reading area until now ( here ). At first glance the following data may seem too fictionalized, due in large part to its unfamiliarity to most; however these technologies are very much a large part of reality, of which only a very small percentage is presented in the multiple interviews ( below ):

Excerpts of a Bob Lazar interview ( 09DEC89 ) on KLAS TV ( Las Vegas, Nevada ), below:

Producer / Host: George Knapp

Lazar: The first thing was hands-on experience with the anti-matter reactor.

Knapp: Explain what that is, how it works, and what it does.

Lazar: It’s a plate about 18-inches in diameter with a sphere on top.

Knapp: We have a tape of a model that a friend of yours made. You can narrate along. There it is…

Lazar: Inside that tower is a chip of Element 115 they just put in there. That’s a super-heavy element. The lid goes on top. And as far as any other of the workings of it, I really don’t know, you know, [ such as ] what’s inside the bottom of it. It [ Element 115 ] sets up a gravitational field around the top. That little waveguide, you saw being put on the top, it essentially siphons off the gravity wave – and that’s later amplified in the lower portion of the craft. But, just in general, the whole technology is virtually unknown.

Knapp: Now we saw the model. We saw the pictures of it there. It looks really, really simple, almost too simple to actually do anything.

Lazar: Right.

Knapp: Working parts?

Lazar: None detectable. Essentially what the job was, to back-engineer [ reverse engineer ] everything, where you have a finished product and to step backwards and find out how it was made or how it could be made with earthly materials. There hasn’t been very much progress.

Knapp: How long do you think they’ve had this technology up there?

Lazar: It seems like quite a while, but I really don’t know.

Knapp: What could you do with an anti-matter generator? What does it do?

Lazar: It converts anti-matter . . . It DOESN’T convert anti-matter! There’s an annihilation reaction. It’s an extremely powerful reaction, a 100% conversion of matter to energy, unlike a fission or fusion reaction which is somewhere around eight-tenths of one percent conversion of matter to energy.

Knapp: How does it work? What starts the reaction going?

Lazar: Really, once the 115 [ Element 115 ] is put in, the reaction is initiated.

Knapp: Automatic.

Lazar: Right.

Knapp: I don’t understand. I mean, there’s no button to push or anything?

Lazar: No, there’s no button to push or anything. Apparently, the 115 under bombardment with protons lets out an anti-matter particle. This anti-matter particle will react with any matter whatsoever, which I imagine there is some target system inside the reactor. This, in turn, releases heat, and somewhere within that system there is a one-hundred-percent-efficient thermionic generator, essentially a heat-to-electrical generator.

Knapp: How is this anti-matter reactor connected to gravity generation that you were talking about earlier?

Lazar: Well, that reactor serves two purposes; it provides a tremendous amount of electrical power, which is almost a by-product. The gravitational wave gets formed at the sphere, and that’s through some action of the 115, and the exact action I don’t think anyone really knows. The wave guide siphons off that gravity wave, and that’s channeled above the top of the disk to the lower part where there are three gravity amplifiers, which amplify and direct that gravity wave.

Knapp: In essence creating their own gravitational field.

Lazar: Their own gravitational field.

Knapp: You’re fairly convinced that science on earth doesn’t have this technology right now? We have it now at S-4, I guess, but we didn’t create it?

Lazar: Right.

Knapp: Why not? Why couldn’t we?

Lazar: The technology’s not even – We don’t even know what gravity IS!

Knapp: Well, what is it? What have you learned about what gravity is?

Lazar: Gravity is a wave. There are many different theories, wave included. It’s been theorized that gravity is also particles, gravitons, which is also incorrect. But gravity is a wave. The basic wave they can actually tap off of an element: why that is I’m not exactly sure.

Knapp: So you can produce your own gravity. What does that mean? What does that allow you to do?

Lazar: It allows you to do virtually anything. Gravity distorts time and space. By doing that, now you’re into a different mode of travel, where instead of traveling in a linear method – going from Point A to B – now you can distort time and space to where you essentially bring the mountain to Mohammed; you almost bring your destination to you without moving. And since you’re distorting time, all this takes place in between moments of time. It’s such a far-fetched concept!

Knapp: Of course, what the UFO skeptics say is, yeah, there’s life out there elsewhere in the universe; it can never come here; it’s just too darn far. With the kind of technology you’re talking about, it makes such considerations irrelevant about distance and time and things like that.

Lazar: Exactly, because when you are distorting time, there’s no longer a normal reference of time. And that’s what producing your own gravity does.

Knapp: You can go forward or backward in time? Is that what you’re saying?

Lazar: No, not essentially. It would be easier with a model. On the bottom side of the disk are the three gravity generators. When they want to travel to a distant point, the disk turns on its side. The three gravity generators produce a gravitational beam. What they do is they converge the three gravity generators onto a point and use that as a focal point; and they bring them up to power and PULL that point towards the disk. The disk itself will attach ONTO that point and snap back – AS THEY RELEASE SPACE BACK TO THAT POINT! Now all this happens in the distortion of time, so time is not incrementing. So the SPEED is essentially infinite.

Knapp: We’ll get into the disks in a moment. But the first time you saw the anti-matter reactor in operation or a demonstration – you had a couple of demonstrations – tell me about that.

Lazar: The first time I saw it in operation, we just put – a friend I worked with, Barry – put the fuel in the reactor, put the lid on, as was shown there. Immediately, a gravitational field developed, and he said, “Feel it!” And it felt like you bring two like poles of a magnet together; you can do that with your hand. And it was FASCINATING to do that, impossible, except on something with great mass! And obviously this is just a . . . And it was a REPULSION field. In fact, we kind of fooled around with it for a little while. And we threw golf balls off it. And it was just a really unique thing.

Knapp: And you had other demonstrations to show you that this is pretty wild stuff, right?

Lazar: Yeah, they did. They were able to channel the field off in a demonstration that they created an INTENSE gravitational area. And you began to see a small little black disk form, and that was the bending of the light.

Knapp: Just like a black hole floating around?

Lazar: Yeah, well, a black hole is a bad analogy, but yeah, essentially.
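As a side note on the energy figures quoted in this exchange: the ‘100% conversion’ versus ‘eight-tenths of one percent’ comparison can be put into numbers with E = mc². The 0.008 fraction below is the interview's figure ( actual fission and fusion mass-defect fractions differ ), and the code is purely illustrative:

```python
# Mass-to-energy arithmetic for the conversion fractions quoted above.
# 0.008 (eight-tenths of one percent) is the interview's figure for
# fission / fusion, not a measured value.

C = 299_792_458.0  # speed of light, m/s

def energy_joules(mass_kg, fraction=1.0):
    """Energy released converting `fraction` of `mass_kg` via E = m c^2."""
    return fraction * mass_kg * C**2

full = energy_joules(1.0)            # total annihilation of 1 kg (~9e16 J)
partial = energy_joules(1.0, 0.008)  # the interview's fission/fusion fraction
print(f"1 kg annihilated : {full:.2e} J")
print(f"1 kg at 0.8%     : {partial:.2e} J  ({full / partial:.0f}x less)")
```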

Interview ( MAR – APR 1990 ) Excerpts of Bob Lazar at KLAS TV ( Las Vegas, NV ), below:

Lazar: The craft does not create an “antigravity” field, as some have surmised. “It’s a gravitational field that’s out of phase with the current one,” Lazar explained in a 1989 radio interview. “It’s the same gravitational wave. The phases vary from 180 degrees to zero … in a longitudinal propagation.” Assuming they’re in space, they will focus the three [ 3 ] gravity generators on the point they want to go to. Now, to give an analogy: If you take a thin rubber sheet, say, lay it on a table and put thumbtacks in each corner, then take a big stone and set it on one end of the rubber sheet and say that’s your spacecraft, you pick out a point that you want to go to -which could be anywhere on the rubber sheet – pinch that point with your fingers and pull it all the way up to the craft. That’s how it focuses and pulls that point to it. When you then shut off the gravity generator[s], the stone (or spacecraft) follows that stretched rubber back to its point. There’s no linear travel through space; it actually bends space and time and follows space as it retracts. In the first mode of travel – around the surface of a planet – they essentially balance on the gravitational field that the generators put out, and they ride a “wave”, like a cork does in the ocean. In that mode they’re very unstable and are affected by the weather. In the other mode of travel – where they can travel vast distances – they can’t really do that in a strong gravitational field like Earth, because to do that, first of all, they need to tilt on their side, usually out in space, then they can focus on the point they need to with the gravity generators and move on. If you can picture space as a fabric, and the speed of light is your limit, it’ll take you so long, even at the speed of light, to get from point A to point B. You can’t exceed it – not in this universe anyway. Should there be other parallel universes, maybe the laws are different, but anyone that’s here has to abide by those rules. 
The fact is that gravity distorts time and space. Imagining that you’re in a spacecraft that can exert a tremendous gravitational field by itself, you could sit in any particular place, turn on the gravity generator, and actually warp space and time and “fold” it. By shutting that off, you’d click back and you’d be at a tremendous distance from where you were, but time wouldn’t have even moved, because you essentially shut it off. It’s so far-fetched. It’s difficult for people to grasp, and as stubborn as the scientific community is, they’ll never buy it that this is in fact what happens.

According to Lazar, the propulsion system he worked on at S-4 gives rise to certain peculiar effects, including INVISIBILITY of the craft: “You can be looking straight up at it, and if the gravity generators are in the proper configuration you’d just see the sky above it – you won’t see the craft there. That’s how there can be a group of people and only some people can be right under it and see it. It just depends how the field is bent. It’s also the reason why the crafts appear as if they’re making 90-degree turns at some incredible speed; it’s just the time and space distortion that you’re seeing. You’re not seeing the actual event happening.” If the crafts look like they’re flying at seven thousand miles per hour and they make a right-angled turn, it’s not necessarily what they’re doing. They can ‘appear’ that way because of the gravitational distortion. I guess a good analogy is that you’re always looking at a mirage – [ it’s only when ] the craft is shut off and sitting on the ground, ‘that is’ what it ‘looks like’. Otherwise, you’re just looking at a tremendously distorted thing, and it will appear like it is changing shape, stopping or going, and it could be flying almost like an airplane, but it would never look that way to you.

Knapp: How close do you think you have to get before time distortion takes place?

Lazar: It’s tough to say, because it depends on the configuration of the craft. If the craft is hovering in the air, and the gravity amplifiers are focused down to the ground and it’s standing on its gravity wave, you would have to get into that focused area. If you’re directly underneath the craft at any time there’s a tremendous time distortion, and that’s in proportion to the proximity of the craft.

Lazar: I don’t know if I mentioned it before, but the amplifiers always run at 100%. They are always outputting a maximum gravity wave, and that wave is phase-shifted from zero to 180 degrees.
That’s essentially the attraction and repulsion, and it’s normally at a null setting somewhere in between. It’s a very straightforward system. It looks more like a coal-fired engine than very hi-tech.

Interview ( JUN – JUL 1990 ) Excerpts of Bob Lazar at KLAS TV ( Las Vegas, NV ), below:

Lazar: …And there are two specific different types of Gravity: Gravity A and Gravity B. Gravity A works on a smaller, micro scale, while Gravity B works on a larger, macro scale. We are familiar with Gravity B. It is the big gravity wave that holds the Earth, as well as the rest of the planets, in orbit around the Sun, and holds the Moon, as well as man-made satellites, in orbit around the Earth. We are not familiar with Gravity A. It is the small gravity wave which is the major contributory force that holds together the mass that makes up all protons and neutrons. Gravity A is what is currently being labeled as the Strong Nuclear Force in mainstream physics, and Gravity A is the wave that you need to access and amplify to enable you to cause space-time distortion for interstellar travel. To keep them straight, just remember that Gravity A works on an atomic scale, and Gravity B is the big gravity wave that works on a stellar or planetary level. However, don’t mistake the size of these waves for their strength, because Gravity A is a much stronger force than Gravity B. You can momentarily break the Gravity B field of the Earth simply by jumping in the air, so this is not an intense gravitational field. Locating Gravity A is no problem, because it is found in the nucleus of every atom of all matter here on Earth, and all matter everywhere else in our universe. However, accessing Gravity A with the naturally occurring elements found on Earth is a big problem. Actually, I’m not aware of any way of accessing the Gravity A wave using any Earth element, whether naturally occurring or synthesized, and here’s why. We’ve already learned that Gravity A is the major force that holds together the mass that makes up protons and neutrons. This means the Gravity A wave we are trying to access is virtually inaccessible, as it is located within matter – or at least the matter we have here on Earth.
No naturally occurring atoms on Earth have enough protons and neutrons for the cumulative Gravity A wave to extend past the perimeter of the atom so you can access it. The most important attribute of the heavier, stable elements [ such as Element 115 ] is that their Gravity A wave is so abundant that it actually extends past the perimeter of the atom. These heavier, stable elements literally have their own Gravity A field around them, in addition to the Gravity B field that is native to all elements. Even though the distance the Gravity A wave extends is infinitesimal, it IS accessible, and it has amplitude, wavelength and frequency just like any other wave in the electromagnetic spectrum. Once you can access the Gravity A wave, you can amplify it just like we amplify any other electromagnetic wave. So, back to our power source. Inside the reactor, element 115 is bombarded with a proton that plugs into the nucleus of the 115 atom and becomes element 116, which immediately decays and releases, or radiates, small amounts of antimatter. The antimatter is released in a vacuum into a tuned tube that keeps it from reacting with the matter that surrounds it. It is then directed toward the gaseous-matter target at the end of the tube. The matter and antimatter collide and annihilate, totally converting to energy. The heat from this reaction is converted into electrical energy in a near-100%-efficient thermoelectric generator. This is a device that converts heat directly into electrical energy. Many of our satellites and space probes use thermoelectric generators, but their efficiency is very, very low. All of these actions and reactions inside of the reactor are orchestrated perfectly, like a tiny little ballet, and in this manner the reactor provides an enormous amount of power. So, back to our original question: What is the power source that provides the power required for this type of travel?
The power source is a reactor that uses element 115 as a fuel, and uses a total annihilation reaction to provide the heat which it converts to energy, making it a compact, lightweight, efficient, onboard power source. I’ve got a couple of quick comments on Element 115, for those of you that are interested. By virtue of the way it’s used – in the reactor – it depletes very slowly, and only 223 grams ( just under ½ pound ) of Element 115 can be utilized for a period of 20 to 30 years. Element 115’s melting point is 1,740° C. I need to state here that even though I had hands-on experience with Element 115, I didn’t melt any of it down, and I didn’t use any of it for twenty to thirty years to see if it depleted. Now, when a disk travels near another source of gravity, such as a planet or moon, it doesn’t use the same mode of travel that we learned about in our science lesson. When a disk is near another source of gravity, like Earth, the Gravity A wave which propagates outward from the disk is phase-shifted into the Gravity B wave propagating outward from the Earth, and this creates lift. The gravity amplifiers ( of the disk ) can be focused independently, and they are pulsed and do not remain ‘on’ continuously. When all three [ 3 ] of these amplifiers are being used for travel, they are in the delta wave configuration, and when only one [ 1 ] is being used for travel, it is in the omicron wave configuration. As the intensity of the gravitational field around the disk increases, the distortion of space-time around the disk also increases. And if you could see the space-time distortion, this is how it would look [ Bob Lazar draws a side-view picture of a saucer hovering above the ground, with a field surrounding it and running straight down to the ground. Picture a disk on the end of a pole, then throw a sheet over it.
] As you can see, as the output of the gravitational amplifiers becomes more intense, the form of space-time around the disk not only bends upward but – at maximum distortion – actually folds over into almost a ‘heart shape design around the top’ of the disk. Now remember, this space-time distortion is taking place 360 degrees around the disk, so if you were looking at the disk from the top, the space-time distortion would be in the shape of a doughnut. When the gravitational field around the disk is so intense that the space-time distortion around the disk achieves maximum distortion, and is folded up into this heart-shaped form, the disk cannot be seen from any vantage point – and for all practical purposes is invisible. All you could see would be the sky surrounding it.
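The fuel claim above can at least be checked for internal arithmetic. Taking the interview figures at face value ( 223 grams of Element 115, totally converted to energy via E = mc², consumed over roughly 25 years, the midpoint of the claimed service life ), a back-of-envelope sketch of the implied energy budget follows. The numbers and the assumption of total conversion come from the interview itself, not from any verified physics:

```python
# Back-of-envelope check of the interview's fuel claim, assuming (as the
# interview asserts) total matter-to-energy conversion of 223 g of fuel
# spread evenly over ~25 years. Purely illustrative arithmetic.

C = 2.998e8          # speed of light, m/s
mass_kg = 0.223      # 223 grams, "just under half a pound"
years = 25           # midpoint of the claimed 20-to-30-year service life

energy_joules = mass_kg * C ** 2                  # E = m * c^2
seconds = years * 365.25 * 24 * 3600
average_power_watts = energy_joules / seconds

print(f"total energy  : {energy_joules:.2e} J")
print(f"average power : {average_power_watts / 1e6:.1f} MW")
```

Under these assumptions the figure works out to roughly 2 × 10¹⁶ joules, an average of about 25 megawatts of continuous output; the result follows only from the interview’s own numbers.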

Interview ( 28DEC1989 ) Excerpts of Bob Lazar at KVEG Radio ( Las Vegas, NV ), below:

KVEG Radio Incoming Caller: With the gravity generators running, is there thermal radiation?

Lazar: No, not at all. I was never down on the bottom ‘while’ the gravity generators were running, but the reactor itself – there’s no thermal radiation whatsoever. That was one of the really shocking things, because that violates the first law of thermodynamics.

Lazar: In fact, I’m in the process of fabricating the gravity amplifier, but then I’m at a tremendous shortage for power. So yeah, I have even tried to do that stuff on my own.

Caller: Is there any electronics, as we know it, chips or transistors?

Lazar: No, nothing like that. Because of the tremendous power involved too, there was ‘no direct connection between the gravity amplifiers and the reactor’ itself.

Caller: Are the waveguides similar to what we use with microwaves?

Lazar: Very similar.

Caller: In regard to the long-range method of travel, isn’t a ‘propulsion unit’ the wrong idea? I feel this device is creating a situation where it is diminishing or removing the localized gravitational field, and the long-distance body – that they’re heading toward – is actually ‘pulling’ the vehicle rather than it [ the vehicle ] being pushed. Am I correct in this?

Lazar: The vehicle is not being pushed. But being ‘pulled’ implies it’s being pulled by something externally; it’s pulling something else to ‘it’. ‘It’ is ‘creating the gravitational field’.

Caller: Is there any relation to the ‘monopoles’, which [ scientists ] have been looking for?

Lazar: Well, they’ve been looking for the ‘monopole magnet’, but then this [ the UFO force ] is a gravitational force.

Caller: What is the top speed of the craft?

Lazar: It’s tough to say a top speed, because to say ‘speed’ you have to ‘compare distance and time’. And when you’re screwing around with time, and distorting it, you can ‘no longer judge a velocity’. They’re ‘not traveling in a linear mode’ – where they just fly and cover a certain distance in a certain time. That’s the ‘real definition of speed’. They’re ‘bending and distorting space’ and then essentially snapping it back with the craft, so the ‘distances they can travel’ are phenomenal – in ‘little or no time’. So ‘speed has little bearing’.

Caller: You’ve mentioned anti-gravity generator and anti-matter generator. Are they different?

Lazar: It’s ‘not a gravity generator’ – it’s a ‘gravity amplifier’. I get tongue-twisted all too often. The ‘anti-matter reactor provides the power’ for the craft, and the basic ‘low-amplitude gravitational wave’ – ‘too low of amplitude’ to do anything – is ‘piped into the gravity amplifiers’ – found at the bottom of the craft – which amplify that to an ‘extremely powerful wave’, and ‘that is what the craft travels along’. But there is ‘an anti-matter reactor’ that ‘provides the power’.

Caller: I understand there’s an antenna section in this device; what is the resonant frequency that it operates at?

Lazar: The resonant frequency of the gravity wave I ‘do know’, but I don’t know it off hand – I just cannot recall it just now.

Mark: Can you give me a ballpark, like 2,000 kilohertz?

Lazar: I really don’t remember. It’s a really odd frequency.

Mark: Is it measured in kilohertz or gigahertz or megahertz?

Lazar: I really don’t remember.

Burt: You were talking about the low- and high-speed modes and the control factors in there. Can you describe those modes and what the ship looks like each time it is going through those modes?

Lazar: The low-speed mode – and I REALLY wish I could remember what they call these, but I can’t, as I can’t remember the frequency of the wave. The low-speed mode: The craft is very vulnerable; it bobs around. And it’s sitting on a weak gravitational field, sitting on three gravity waves. And it just bounces around. And it can focus the waves behind it and keep falling forward and hobble around at low speed. The second mode: They increase the amplitude of the field, and the craft begins to lift, and it performs a ROLL maneuver: it begins to turn, roll, begins to turn over. As it begins to leave the Earth’s gravitational field, they point the bottom of the craft at the DESTINATION. This is the second mode of travel, where they converge the three gravity amplifiers – FOCUS them – on a point that they want to go to. Then they bring them up to full power, and this is where the tremendous time-space distortion takes place, and that whips them right to that point.

Burt: Did you actually bench-test a unit away from the craft itself?

Lazar: The reactor, yeah.

Burt: About how large is this, and could you describe it?

Lazar: The device itself is probably a plate about 18 inches square; I said diameter before, but it is square. There’s a half-sphere on top where the gravity wave is tapped off of, but that’s about the size of it.

Caller Jim ( Las Vegas, NV ): On TV [ television ], you spoke of observing a demonstration of this anti-matter gravity wave controller device. And you made a mock-up copy?

Lazar: A friend made one, yeah.

Jim: I heard you speak of bouncing golf balls off of this anti-gravity field?

Lazar: Yeah.

Jim: And also about the candle, the wax, and the flame stood still?

Lazar: Right.

Jim: And then the hole that you saw appear –

Lazar: It wasn’t a hole; it was a little disk.

Jim: Under what conditions did you see this demonstrated? Elaborate on this. And how large was the force field?

Lazar: The force field where the candle was?

Jim: The force field created by the anti-matter device.

Lazar: It was about a 20-inch radius from the surface of the sphere.

Jim: Where was this area, just above the device?

Lazar: Yeah, surrounding the sphere.

Jim: Did the sphere surround the device?

Lazar: No, the sphere sits in the center of the device. It’s a half-sphere sitting on a plate, and a field surrounds the half-sphere.

Jim: And you just place a candle in there?

Lazar: No, no, no. That was a separate demonstration. I’m just telling you from where the field extends.

Jim: Oh, that’s what I’m curious about.

Lazar: No, they tap the field off using a wave-guide, off of the sphere. And this is a completely different setup, where they had a mockup small gravity amplifier, and there were three focused into a point, and that area of focus was probably nine or ten inches in diameter.

Jim: They displaced this area or moved this area?

Lazar: No, it wasn’t displaced; it’s just where the field was generated.

Jim: And in there you put the candle?

Lazar: Right.

Jim: And that thing can actually bounce golf balls off of it?

Lazar: No, no. The golf ball thing, again, had nothing to do with that setup. The golf ball thing had something to do with just when the reactor was energized, before the wave-guide was put on or anything. We were just pushing on the field; it was being demonstrated to me; and we just bounced a golf ball off the top.

Interview ( 01MAY1993 ) Excerpts of Bob Lazar at a Seminar ( Rachel, NV ), below:

Question: I’m interested in a little bit more about the physics of the power generation, from the development of the anti-matter to the Gravity “A” wave, and the amplification, and the process of generation of that and being able to fold space.

Lazar: Well, it’s… I can give you, I guess, a brief overview of essentially how that works. If you want an in-depth description, you can give me your address and I can send you a paper on it. Essentially, what the reactor does is provide electrical power and the base gravity wave to amplify, and it does that by interacting matter and antimatter, essentially. The way it does that is injecting an accelerated proton into a piece of 115. That spontaneously generates anti-hydrogen, essentially. That’s reacted in a small area. It’s a compressed gas, probably compressed atmospheric gas, and the antimatter reacting with matter produces the energy, mainly heat energy, and that is converted into electrical energy by a thermionic [ thermal ion / thermion ] generator that appeared to be 100% efficient, which is a difficult concept to believe anyway. Also, the reactor has two functions. That’s one of them; the other function is, it provides the basic gravity wave that’s amplified, and that appears at the upper sphere of the amplifier itself, and that’s tapped off with a wave-guide, similar to microwaves, and is amplified and focused, essentially.

Question: So how is the electrical energy related to the amplification of the gravitational “A” wave energy?

Lazar: The electrical energy is transmitted essentially without wires, and I related it to almost a Tesla setup. It seemed like each sub-component on the craft was attuned to the frequency that the reactor was operating at, so essentially the amplifiers themselves received the electrical energy, like a Tesla coil transmits power to a fluorescent tube, and what was the rest of the question?

Question: Yeah, in other words, what is the relationship between… I think you basically answered it.

Lazar: Yeah, that’s how the amplifiers receive the power, and through the wave-guide they receive the basic wave. It’s almost… It’s very, very similar to a microwave amplifier…

Question: Was the local means of propulsion the same as these across-space distances? What was the local means of propulsion?

Lazar: The local means of propulsion is essentially them balancing on an out-of-phase gravity wave, and it’s not as stable as you would think. When the craft took off, it wobbled to some degree. I mean, a modern-day Hawker Harrier, or something along those lines of vertical-takeoff craft, is much more stable than that in the omicron configuration, which is that mode of travel. The delta configuration is where they use the three amplifiers. Those are the only two methods I know about for moving the craft.

Question: When you listen to some abduction reports, whether people believe it or not, there seems to be a common thread of people being hit by blue beams of light…

Lazar: Any of the three gravity amplifiers could do that, could lift something off the ground, or for that matter compact it into the ground. That’s not a problem, because the craft can operate on one amplifier, in omicron mode, hovering. That would leave the other two [ 2 ] amplifiers free to do anything. So I imagine they could pick up cows or whatever else they want to do. On the craft I worked on there was absolutely no provision for anything to come in through the bottom of the craft, or anything along those lines…

Question: So what was the course of energy? How did it go from one area to another area?

Lazar: The best guess is essentially it operated like a Tesla coil does. A transmitter and essentially a receiver tuned to the transmitting frequency receives electrical power. There again, that’s not real advanced technology. Tesla did that in the 30s, I think.

Question: You mentioned the photon earlier. Do you think that physics is taking a wrong turn by looking for exchange particles, when you’re talking about the strong force of gravity again? I’m not clear why you’re skeptical about the graviton.

Lazar: About the graviton?

Question: Every other force seems to have exchange particles connected with it.

Lazar: No, not necessarily. I mean, they make it have one, but as time goes on, that really hasn’t held true. The bottom line is, they don’t… First of all, they don’t even believe there’s a graviton anymore, so I’m not the only one. As far as exchange particles, still, though some of them, like the zeta particle, maybe that’s an actual thing, but when they’re looking at transfers of energy, I think these are scapegoats for the most part. A lot of experiments that I was doing at Los Alamos essentially were along these same lines, but other exchange particles, like the intermediate vector boson, I don’t believe that thing exists. I really don’t. I think they’re grabbing at straws and just coming up with excuses.

Question: What about the small gravity, the Gravity “A”; how can you detect that one? What is the frequency of that?

Lazar: Well, the frequency that the actual reactor operates at is like 7.46 Hertz. It’s a very low frequency.

Question: That’s the frequency of Earth’s gravity, or universally, all gravity?

Lazar: That’s the frequency the reactor operates at.

Question: I can understand a reactor functioning – theoretically I can understand a reactor functioning at, say, ( unintelligible word ) 7.46 Hertz. There’s a wave-guide involved. I don’t buy 7.46…

Lazar: No, that’s the basic… The frequency of the gravity wave that’s produced has to be a higher frequency, because you’re in a microwave range to follow a conduit like that.

Question: I understand from Lear’s lecture that it had a tendency to conduct on the outside of the reactor also.

Lazar: Right. Well, that’s all… this was the electric field we were talking about. The basic frequency, I think, was the way the reactor was operating. The pulses that we detected out of it were probably, instead of a straight DC power supply, more along the lines of a pulse, as if we were getting a burst of particles coming out: an antimatter emission, then a reaction, a pulse of energy, and that would repeat. That’s about seven and a half Hertz, something along those lines.

Question: Bob, the microwave frequency going to the wave-guide is electromagnetic, or is that gravitational?

Lazar: They’re one and the same.

Question: I don’t understand what you mean by that.

Lazar: Gravity is… Unfortunately, physics hasn’t gotten to that part yet, but gravity essentially is part of the electromagnetic spectrum.

Question: Then what frequency is it?

Lazar: That’s something I’m reserving for myself.

Question: Something about the microwave range?

Lazar: Something about the microwave range. Well, you can sort of figure it out by the dimensions of the waveguide itself, and that’s about it.

Question: Positive energy versus regular photon?

Lazar: No, it’s not photon.

Question: Electromagnetic energy?

Lazar: Right. I’m not trying to be secret, but this is part of the equipment that I’m working on, and I want to get it operating before…

Question: I hope we’ll find out one day.

Lazar: Absolutely.
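Lazar’s closing hint, that the frequency can be figured out from the dimensions of the waveguide, matches standard microwave engineering: a rectangular waveguide only propagates above a cutoff frequency set by its broad interior dimension, f_c = c / ( 2a ) for the dominant TE10 mode. A short sketch using a hypothetical guide width ( the craft’s actual waveguide dimensions are never given in the interviews; the WR-90 width below is a standard X-band guide used purely as a stand-in ):

```python
# Cutoff frequency of the dominant TE10 mode in a rectangular waveguide:
#   f_c = c / (2 * a), where a is the broad interior dimension.
# The 22.86 mm width below is the standard WR-90 X-band guide, used here
# only as a stand-in; no dimension is given in the source interviews.

C = 2.998e8  # speed of light, m/s

def te10_cutoff_hz(broad_wall_m: float) -> float:
    """Lowest frequency the guide will carry in its dominant TE10 mode."""
    return C / (2.0 * broad_wall_m)

a = 22.86e-3  # WR-90 broad wall, metres (assumed, illustrative)
f_c = te10_cutoff_hz(a)
print(f"TE10 cutoff: {f_c / 1e9:.2f} GHz")
```

For a WR-90-sized guide the cutoff works out to about 6.56 GHz, so the dimensions alone would pin the usable band to “something about the microwave range”, consistent with the exchange above.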

Speech of Edgar Fouché at International UFO Congress ( Summer 1998 ), below:

[ photo ( above ) LOCKHEED SR-71 Blackbird surveillance aircraft ( click to enlarge ) ]

I’m here to speak about government technology, special programs, and the TR-3B Flying Triangle. Thousands of sightings of the Flying Triangle have been reported, photographed, and investigated around the world – yet the USAF denies having such a vehicle. It also denies having replaced the strategic reconnaissance spy plane, the SR-71 Blackbird. Keep this in mind as I proceed:

Astronauts Edgar Mitchell and Gordon Cooper say that new investigations into UFOs are warranted.

Edgar Mitchell, who became the sixth [ 6th ] man on the Moon during the Apollo 14 mission, said, “The evidence points to the fact that Roswell [ New Mexico crash of UFO ] was a real incident and that indeed an alien craft did crash and that material was recovered from that crash site.” Mitchell doesn’t say he’s seen a UFO, but he says he’s met with high-ranking military officers who admitted involvement with alien technology and hardware.

Gordon Cooper recently told a United Nations ( UN ) committee: “Every day in the USA, our radar instruments capture objects of ‘form’ and ‘composition’ unknown to us.” Cooper speculates that public skepticism toward UFOs will shift dramatically.

Now, a little about my background:

I’ve held positions within the United States Air Force ( USAF ) that required me to have Top Secret and ‘Q’ clearances, and Top Secret Crypto access clearances.

I’ll show you pictures of some of the aircraft programs I’ve worked on. I’ll also show you some pictures of classified aircraft. And I’ll share with you some of the information and stories I’ve gathered through my research in developing [ my book ] Alien Rapture. In many cases I’ve been able to obtain actual details of the black technology [ described in my book, “Alien Rapture – The Chosen” ].

I was born to fifth [ 5th ] generation French-Americans, and many of my relatives – for generations – have historically been involved with the government in fields of intelligence, black programs, cryptography, and classified development projects.

This is true as far back as the French Revolution, when Joseph Fouché – a direct ancestor of mine – served as Minister of Police under Napoleon and headed the French secret National Police Force. Joseph Fouché started and controlled the world’s first professionally organized intelligence agency, with agents throughout Europe.

The CIA, the Russian KGB ( now FSB ), the UK’s MI5 and MI6, Israel’s Mossad, and many other intelligence agencies have used and expanded on his methods of intelligence gathering, networking information, and political survival.

I have also worked on intelligence and cryptography related programs but, because of oaths of secrecy, I will ‘not be able to share any details of this work’.

My career background spans 30 years, and since the government isn’t about to support my claims, you will see from the positions I’ve held and the programs I worked on that I was in a position to gather the information I am presenting.

Before I gave a presentation to the International UFO Congress in Laughlin, Nevada during August [ 1998 ], I brought over 200 documents as an offer of proof to substantiate my credibility. These documents contained information on the positions and assignments I held in the U.S. Air Force and as a DOD [ U.S. Department of Defense ] contractor. They also detailed the clearances I held, the classified and non-classified ( DOD and military ) technical training I received ( over 4,000 hours ), and performance reviews from 1968 to 1995.

As a civilian, from 1987 to 1995, I performed as engineering program manager, site manager, and Director of Engineering for several DOD contractors.

Ken Seddington, of the International UFO Congress; Jim Courrant, a UFO investigator; and Tim Shawcross and John Purdie, of Union Pictures in London, England, viewed these documents. Some of these documents are shown in the “Riddle of the Skies” special, which will be on The Learning Channel next month.

With my training and experience with intelligence equipment, special electronics, black programs, and crypto-logical areas, I received other government opportunities. I filled positions as major command liaison, headquarters manager, and DOD factory representative for TAC, SAC, ATC, and PACAF following the Viet Nam War.

Later in my career, as a manager of defense contractors, I dealt with classified ‘black programs’ developing state-of-the-art electronics, avionics, and automatic test equipment [ ATE ].

I was considered an Air Force expert on classified electronics countermeasures test equipment, certain crypto-logical equipment – owned by the National Security Agency – and Automatic Test Equipment [ ATE ].

I’ve worked with many of the leading military aircraft and electronics manufacturers in the U.S. At different times I participated as a key member in design, development, production, and flight operational test and evaluation in classified aircraft development programs and state-of-the-art avionics, including electronic countermeasures, satellite communications, and crypto-logic support equipment.

During my military career, I was ‘hand-picked’ ( Development Cadre ) for many of the Air Force’s newest fighter and bomber development programs. I also represented many of these programs for TAC, SAC, PACAF, and ATC.

Other research and development programs I worked as far back as the 1970s are still classified Top Secret.

My involvement with black programs, developing stealth aircraft, is classified.

I am perhaps the only person who has actually worked at the Top Secret Groom Lake Air Base, within Area 51 of the Nellis Range, and can prove that I had the position, training, and clearances to be there.

[ photo circa: 1974 ( above ) DOD DARPA Have Blue Project F-117 stealth fighter ( click to enlarge ) ]

This [ NOTE: not the photo above ] is an F-117 Stealth fighter being readied at Groom Air Base at night. Notice the fog engines at work for cover.

My last position for the Air Force was as a Strategic Air Command Headquarters liaison. As a defense contractor-manager, I performed as an engineering program manager and site manager for DOD contractors involved in classified development, logistics support, electronic engineering, and technical data development from 1987 to 1995.

I have completely disassociated myself from the defense industry. I consider myself a writer and inventor now.

I undertook this trip to do research for my book Alien Rapture, which included a meeting with five [ 5 ] close friends who had agreed to release confidential information to me and discuss their closely guarded personal experiences.

I also interviewed other contacts that had worked classified programs or flown classified military aircraft to gather information about UFO sightings and contact.

Later, I was blessed to team up with a great man and a great writer, Brad Steiger. I had decided to get out of the defense industry, as I felt that fraud, waste, and abuse were rampant – on both the government and contractor sides.

Who were the five [ 5 ] friends and co-conspirators and a host of other insiders?

It started when some old friends of mine met in the spring of 1990 in Las Vegas [ Nevada ]. There were five [ 5 ] of us then; all of us had remained close following the Vietnam War. I’ve always been the networker for my DOD, military, and contractor friends, so I’m the one who set up the meeting with the five [ 5 ]:

1. The first friend, Jerald *, was a former NSA T.R.E.A.T. Team member. T.R.E.A.T. stands for Tactical Reconnaissance Engineering Assessment Team. Jerald * worked for the DOE [ U.S. Department of Energy ] as a national security investigator. That was his cover, but he really worked for the NSA [ U.S. National Security Agency ]. His job required him to manage a team – to ‘watch employees’ with Top Secret and Q clearances – in the mid-west; in Los Alamos, Sandia, and White Sands ( New Mexico ); and in the Nevada Test Site and Nellis Range, which includes Area 51. Area 51 is where the most classified aerospace testing in the world takes place. You may know the base as Groom Lake Air Base, Watertown, The Ranch, or Dreamland. He [ Jerald ] was found dead of a heart attack one year after our last meeting.

2. The second friend, Sal *, was a person who had worked directly for the NSA [ U.S. National Security Agency ] with Electronic Intelligence ( ELINT ) and became a defense contractor after his retirement.

3. The third friend, Doc *, was a former SR-71 spy plane pilot and USAF test pilot at Edwards Air Force Base [ California ].

4. The fourth friend, Dale *, and I were in the service together during the Viet Nam conflict [ war ], and I’ve known him [ Dale ] since the early 1970s. His father worked for over 20-years for the NSA [ U.S. National Security Agency ] and he [ Dale ] is the one who sent me the MJ-12 [ Majestic ] documents his father had obtained. These documents, the New MJ-12 Charter signed by proxy during the Reagan [ Ronald Reagan ] Administration and Attachment D to the Eisenhower [ U.S. Army General Dwight D. Eisenhower ] MJ-12 briefing document, which is the Autopsy Report from Roswell [ New Mexico ], are included as attachments in my book Alien Rapture.

5. The fifth friend, Bud *, was a DOD contractor and electronics engineer. He [ Bud ] had worked on Top Secret development programs dealing with Electronic CounterMeasures [ ECM ], radar homing and warning, ECM ( Electronic CounterMeasure ) jammers, and Infra-Red [ IR ] receivers. He [ Bud ] retired as a program manager and later died of a brain tumor within 30-days after his symptoms appeared.

*All names and identifying factors have been changed.

It bothered each of us that we had experiences with unusual phenomena, extremely advanced technology, and witnessed unidentified aerial contact that had not been previously reported. We sat at a table in a dark corner of the Silver Dollar Saloon and Casino in Las Vegas [ Nevada ], discussing our experiences and swapping knowledge.

In 1990, I had no intention of writing about programs I was involved-with due to the Secrecy Act and classification documents I had signed.

Jerald asked me if I had ever heard of the ‘Flying Triangle’.

Of course I had heard rumors of Delta shaped and bat winged shaped prototypes being tested at Groom Air Base.

He [ Jerald ] said that an early test model – of the Flying Triangle – was sighted by hundreds of people over the Hudson Valley in the mid 1980s, and that there was a major flap in Belgium – the year [ 1989 ] before our meeting [ 1990 ] – where thousands of people had witnessed the Triangle and the F-16 chase that followed. He definitely piqued my curiosity.

Over the next 4-years, each member of the group wrote down as much information as he could remember about unusual phenomena and personal sightings.

From my close friends, came their contacts. I agreed to interview these contacts in person. I interviewed four [ 4 ] other [ LOCKHEED ] SR-71 pilots, two [ 2 ] [ LOCKHEED ] U-2 [ Dragon Lady ] pilots, a [ 1 ] TR-1 pilot, and about two dozen [ 24 ] bomber and fighter jocks [ jockeys ]. None, of the people I interviewed, wanted to be known or quoted – and wanted me to swear never to reveal their names. I have and will continue to honor their wishes.

Many were afraid of what the government would do to them for talking about Top Secret ‘Black Programs’ they were involved with, and others were just worried about losing their retirement pensions.

I’ll share some of these secrets and unusual phenomena with you:

[ photo ( above ) LOCKHEED A-12 ( single and dual cockpit versions ) ]

The SR-71 was designed as a spy plane for the CIA in the 1960s and designated the A-12.

[ extremely rare photo circa: 1954 ( above ) LOCKHEED YF-12 Prototype ( click to enlarge ) ]   [ rare photo circa: 1958 ( above ) LOCKHEED A-12 ( click to enlarge ) – NOTE: USAF brand new 1958 Edsel station wagon ( blue with white top ) and Dodge Power Wagon ( blue with white top ) pick-up truck ( mid image far right ) ]

[ photo ( above ) LOCKHEED A-11 ( click to enlarge ) ]

The Mach 3 plus aircraft first flew in 1962 [ ? ], taking off from Groom AFB [ ? ] in Area 51.

[ rare photo circa: 1960 ( above ) LOCKHEED SR-71 ( click to enlarge ) ]

Later, once the Air Force operated it as a reconnaissance plane, it was designated the SR-71 BlackBird.

My friend Chuck, an SR-71 pilot, related to me an in-flight incident he experienced in the 1970s. He was returning from a reconnaissance flight, and while at an altitude of 74,000 feet at a speed of almost Mach 3 ( 3 times the speed of sound ), he noticed something flickering in his peripheral vision. Hovering over his left wing tip was a ball of dense, plasma-like light. It was so bright that when he stared at it for more than a few seconds, his eyes hurt.

Chuck tried to use his UHF-HF and VHF communications sets to no avail. There was nothing but static. Repeatedly glancing briefly at the ball of light, he watched in amazement as it moved effortlessly about his aircraft.

At one point the light positioned itself a few feet in front of the large spiked cone at the air Intake Inlet. The enormous amount of air rushing into the engines should have sucked in and shredded almost anything in its path, but the light orb was mysteriously unaffected.

The light, he noted, acted in a curious manner, if something inanimate could act at all. It moved from time to time to other parts of the vehicle, staying with him until his approach to Beale AFB in California. He was in sight of the Air Base when the light swung away from his aircraft in a wide arc with ever increasing speed.

Of course, after reading his incident report, his operations commander told him not to ever speak about his experience. When Chuck related the story to me, he told me he was absolutely convinced that the ball of light was controlled by some form of intelligence. I have about two dozen [ 24 ] stories from pilots of similar in-flight incidents with UFOs and plasma balls. There have been thousands of reported sightings of plasma balls, energy filled orbs, or foo fighters as they were named during World War II.

In 1944, while fighting the Japanese and Germans, pilots started reporting strange flares and bright orange and red lights. These lights moved rapidly, were under intelligent control, and could come to a complete stop, remain stationary, and then disappear in an instant.

Foo is said to come from the French feu, meaning ‘fire’. The pilots coined the term ‘foo fighters’ for the haunting glowing balls that doggedly paced their aircraft. Most were unnerved by the radical maneuvers of the foo fighters, which could climb vertically, accelerate, and make high-G turns at speeds far beyond any known Allied aircraft.

A triangular shaped aircraft was spotted off western Scotland, not far from the Royal Air Force base at Machrihanish. Machrihanish has been rumored to be a base for black aircraft operations for a number of years. It’s also a NATO standby base.

RAF personnel have admitted that they witnessed the operation of large triangular aircraft from RAF Boscombe Down in February 1997.

It was widely reported that a secret U.S. spy plane crash landed at Boscombe Down in 1994. It had been rumored for some time that the Triangle spotted over Belgium was based at Boscombe Down and Royal Naval Air Station ( RNAS ) Yeovilton where other sightings of the Triangle were reported.

British RAF [ Royal Air Force ] have a long history of close involvement with U.S. aerospace black programs. Key RAF officers and British scientists have been involved at Groom Air Base [ Nevada ] since 1957 and the [ LOCKHEED ] U-2 [ DragonLady ] program.

In 1995 and 1996 the National UFO Reporting Center alone received forty-three [ 43 ] reports of sightings of a triangular aircraft:

11 in the State of Washington; 8 in the State of California; and, 18 from other states – from the State of Hawaii to the State of New York.

A few years ago the British magazine UFO Reality published this information:

“A top BBC [ British Broadcasting Corporation ] executive let slip recently that there is a D-Notice on media reporting of the so-called ‘Black Triangle’. The executive is the former producer of a very popular BBC science program. He told one of our team that the black Triangle ‘craft’ – first witnessed by the hundreds in the Hudson Valley region of the U.S. in the mid-1980s, then by the thousands in Belgium in 1989 – 1990, and more in Britain – has been ‘heavily D-Noticed’ by the government. For this reason the BBC will NOT be reporting on the enigmatic craft, no matter how many witness reports there are. According to this producer, the government’s restrictive notice on reporting the Triangle was authorized under secrecy laws, in order to protect secret new military projects.”

From 1973 through 1976, I was home based out of Edwards AFB. It is near Lancaster, California and even nearer to the San Andreas fault [ plate tectonic earthquake demarcation line ].

Edwards [ AFB ] has a long history with secret technology and experimental aircraft. The YB-49, which looks a lot like the B-2 stealth bomber, was flown in 1948 at Edwards AFB.

[ photo ( above ) NORTH AMERICAN XB-70 Valkyrie with 6 GENERAL ELECTRIC engines ( click to enlarge ) ]

The XB-70 flown in 1964 looks a lot like the still Top Secret SR-75 that, the Air Force says doesn’t exist.

Edwards A.F.B. is the home of the U.S. Air Force Test Pilot School and is responsible for Flight Operational Test and Evaluation [ FOTE ] of the Air Force’s newest aircraft.

Edwards [ AFB ] hosts a number of tenant organizations, from NASA to the Jet Propulsion Laboratory [ Pasadena, California ] facility.

Edwards [ AFB ] developed various versions of the flying wing [ shaped aircraft ] – from the B-35 and YB-49 to the B-2 [ Spirit ] – and exotic aircraft sometimes ahead of their time, like the XB-70, F-117, and YF-22.

I worked with the F-111 swing-wing bomber, the F-15 air superiority fighter, the F-16 fighter, the A-10 [ Warthog ] close air support attack aircraft, and the B-1 bomber. I was involved with these and other classified development programs when they were just a gleam in some pilot trainee’s eyes.

One night – in the mid 1970s – a long time friend of mine and I were standing on top of the Fairchild A-10 [ Wart Hog ] hangar at Edwards AFB in southern California. It was about 2:00 a.m. on a clear night with millions of stars visible to the naked eye. I noticed a group of stars that seemed to be shifting in color. I pointed out to my friend that the three [ 3 ] bright stars in triangular formation were not part of the big dipper.

We watched as the strobing stars shifted from bright blue to a red-yellow color. After a period of about 20-minutes we could tell the objects probably weren’t stars because they were getting larger. This was somewhat unnerving.

* It was further unnerving when ‘the space in-between the enlarging lights began blocking out the stars in the background’. [ See, e.g. beginning of this report, i.e. Victor Valley northeast area of California alongside Interstate 15 ( further above ) ]

We decided it probably was a Top Secret Air Force vehicle of some type; still, we weren’t sure. The vehicle had gone from half the size of the big dipper to twice its size in under a half hour [ 30-minutes ] and had moved from the west to the east, towards the base.

About the time we could make out a silhouette or outline of the triangular vehicle, the lights – or possibly exhausts – flared brighter and vanished from the sky in an instant.

This experience wasn’t my first sighting, but it was one of the few where I had a witness. In the summer of 1976, I relocated to Nellis Air Force Base – north of Las Vegas. I spent the next 3-1/2 years there. I worked primarily with the F-15, electronics countermeasures, and Automatic Test Equipment [ ATE ]. I had heard rumors of airbases located in the desert, at places called:

Mercury; Indian Springs; and, Others, that didn’t even have names.

Before the collapse of the [ Russia ] USSR ( CCCP ), no one talked about their classified work experience, nor did anyone repeat rumors of Top Secret technology and aircraft.

Most of us who had Top Secret clearances never even told our wives what we were doing, and where we were going – when on these type projects. I once spent 6-months, in Viet Nam, while my ex-wife thought I was attending a classified technical school in Colorado.

The military, in a court of law, actually denied the existence of a classified Air Force Base inside the Nellis Range out in the Nevada Desert. Don’t you know, the Plaintiffs – who had worked at Groom – and their lawyer were surprised to hear this, but that’s another story.

I was one of the few personnel at Nellis who had a Top Secret clearance with Crypto access. I was certified to work on Mode 4 IFF ( an aircraft system which responded to classified, encrypted codes ). I was also certified to work on other Crypto equipment that I cannot discuss.

Due to a combination of coincidences and my technical experience, I was requested for a temporary assignment at a place that had no name. My commander told me I was to report to an office on the base, and he didn’t have a clue where I was going or what I was going to be working on. And let me tell you, he wasn’t too happy about being left in the dark.

I left one Monday morning long before sunrise. It was 4:30 AM when I boarded a dark blue Air Force bus with all of the windows blacked out. There were 28 other people on the bus, not including the 2 security policemen holding M-16 automatic weapons and the bus driver. We were each told when boarding, “Do not speak on this bus unless you are spoken to.” Not one of us uttered a word. There is nothing that can inspire compliance like an M-16 sticking in your face, I assure you!

The bus drove through the desert for several hours – this much I knew from the poor air-conditioning and the amount of fine dust that came through every crack in the old vehicle – and it was soon obvious where I was.

In the 1950s, the U.S. government started building the super secret Groom Lake facilities – for the CIA U-2 ( DragonLady ), a high-altitude global surveillance aircraft. Acquired in 1951 – with $300,000,000 in seed money from the CIA – the site was called S-4, the name insiders use for the Area 51 / Site 4 facilities at Papoose Lake, Nevada, south of Groom Lake, Nevada. Area 51 is located in the north central part of the Nellis Air Force Base ( AFB ) Range. Construction of facilities within the Nellis Range continues even today.

The SR-71 Blackbird surveillance aircraft, TR-1, F-117 stealth fighter, B-2 bomber, TR-3A Manta, and TR-3B Astra or flying triangle were tested at Groom Lake, Nevada. Had persons sighted these craft under certain circumstances, they might well have been taken for unidentified flying object ( UFO ) extraterrestrial spacecraft.

The SR-75 replaced the SR-71 Blackbird.

The SR-74 SCRAMP is a drone that appears to ride under the SR-75.

TR-3B Flying Triangles are operated there – as well as other Top Secret prototype and operational aerospace vehicles.

Many of these aircraft have been misidentified as UFOs.

When we reached Groom Lake, the bus pulled into a hangar and they shut the doors. The security personnel checked me in – while other security personnel dispatched the others to their places of work. I was given a pair of heavy glasses to wear, which can only be described as looking like welder’s goggles. The lenses were thick, and the sides of the goggles were covered to obliterate my peripheral vision. Once I had these goggles on I could only see about 30-feet in front of me. Anything beyond that distance became increasingly blurred. If an M1 Tank barrel had been pointed at me from about 50-feet away, I would not have seen it. It was very disconcerting to have to wear those glasses.

The whole time I was there – some 10 consecutive days, followed by several follow-up visits – the routine was the same: leave Nellis before sunrise, and return home after dark every day.

Only once did I get a chance to see the whole base, and that was when I was flown up – from Nellis – in a helicopter to Groom Lake for emergency repairs of their crypto test equipment. For those stationed at Groom Lake, or commuting there daily – with flight schedules posted for classified flights – everyone ‘not cleared for that particular program and flight’ must be off the ramp – inside 30-minutes – prior to the scheduled operation.

A couple of thousand personnel are flown into Area 51 daily, from McCarran Airport in Las Vegas, Nevada and – from Edwards AFB in California – on contractor aircraft. Several hundred commute from Tonopah and Central Nevada via the north entrance near Rachel, Nevada. Other commuters use its [ Area 51 ] south entrance via Mercury, Nevada or Indian Springs, Nevada west of Las Vegas, Nevada.

While at Groom Lake I made contacts and met people from other programs. Over time, a few became friends and we exchanged stories.

On my 3rd day on the job at Groom Lake, I had to remove a module from a multi-bay piece of satellite communications equipment used to support certain special mission aircraft. I noticed while inside the bay checking out the wiring that it contained a sealed unit about the size of a large briefcase. It had a National Security Agency ID plate on it. The nomenclature on the nameplate was, Direct Orbital Communication Interface Link Emitter [ DOCILE ]. I thought this was strange, as the unit was part of a digital communications link used solely to communicate with classified Air Force vehicles. I was unaware – at the time – of any military orbital missions not related to NASA. Remember, this was in the late 1970s – the shuttle didn’t fly until 1981.

I disconnected the unit, and – out of curiosity – I removed the rear access cover. To my amazement, there were some half-dozen [ 6 ] large hybrid integrated circuit [ IC processor ] chips inside. The largest chip had over 500 hair-thin leads attached and was approximately the size of a Zippo lighter. The ‘paper inspection stamp’ on the chip was dated 1975.

In 1975, the most advanced processor speeds – on the most classified projects – were equivalent to an Intel 8088, which ran at 4,000,000 cycles per second ( 4 MHz ), but this unit had a processor speed of 1,000,000,000 cycles per second ( 1 GHz ).
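For scale, taking the two stated clock rates at face value, the gap works out to a factor of 250. A minimal sketch ( clock rate alone is, of course, only a crude proxy for real computing throughput ):

```python
# Clock rates as stated in the text, in cycles per second.
rate_1975 = 4_000_000             # ~4 MHz, the cited 1975 state of the art
rate_mystery_unit = 1_000_000_000  # 1 GHz, claimed for the chip in the sealed unit

# Ratio of the two clock rates.
speedup = rate_mystery_unit // rate_1975
print(speedup)  # -> 250
```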

It wasn’t until more than a dozen [ 12 ] years had passed that I saw comparable integrated circuit chip technology, and then it was on a Top Secret avionics development project at the ITT company.

In the mess hall at Groom Lake, I heard words like:

Lorentz Forces; Pulse Detonation; Cyclotron Radiation; Quantum Flux Transduction Field Generators; Quasi-Crystal Energy Lens; and, EPR Quantum Receivers.

I was told ‘quasi-crystals’ were the key to a whole new field of propulsion and communication technologies.

To this day, I’d be hard pressed to explain to you the unique electrical, optical and physical properties of Quasi-Crystals and why so much of the research is classified.

Even the unclassified research [ on quasi crystals ] is funded by agencies like the Department of Energy [ DOE ] and the Department of Defense [ DOD ].

Why is the U.S. Department of Energy and Ames Laboratory so vigorously pursuing research with Quasi-Crystals?

What is the DOE new Initiative in Surface and Interface Properties of Quasi-crystals?

“Our goal is to understand, and facilitate exploitation of, the special properties of Quasi-crystals. These properties include ( but are not limited to ) low thermal and electrical conductivity, high-hardness, low friction, and good oxidation resistance.” That’s the ‘unclassified’ part.

What are Quasi-Crystals?

In 1984, a paper was published which marked the discovery of quasi-crystals: two ( 2 ) distinctly different metallic crystals joined symmetrically together.

[ NOTE: See, e.g. Project CARET ( 1984 ), at: http://upintelligence.wordpress.com/2010/10/23/secret-extraterrestrial-information-technologies/ ]

By 1986, several Top Secret advanced studies were going on – funded by DARPA – with leading scientists already working in the field.

In classic crystallography, a crystal is defined as a three dimensional ( 3-D ) periodic arrangement of atoms with translational periodicity along three ( 3 ) principal axes.

Since Quasi-crystals ‘lose periodicity’ in at least one [ 1 ] dimension, it is not possible to describe them in 3D-space as easily as normal crystal structures. Thus, it becomes more difficult to find mathematical formulae for interpretation and analysis of diffraction data.

After the official release of the discovery of Quasi-crystals in 1984, a close resemblance was noted between icosahedral quasi-crystals and 3-D Penrose patterning.

Before quasi-crystals were discovered in 1984, British mathematician Roger Penrose devised a way to cover a plane in a non-periodic fashion using two ( 2 ) different types of tiles arranged so that they obey certain matching rules. The three-dimensional generalization – using two kinds of rhombohedra instead of the flat rhombi – later became known as 3D Penrose Tiling.

12-years later, ‘Penrose Tiling’ became the prototype of very powerful models explaining structures within Quasi-crystals discovered in rapidly quenched metallic alloys.

14-years of quasi-crystal research established the existence of a wealth of stable and meta-stable quasi-crystals with 5, 8, 10, and 12 fold symmetry, with strange ‘structures’ and interesting ‘properties’. New tools had to be developed for the study and description of these extraordinary materials.
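Why those symmetries were so startling can be checked directly. By the crystallographic restriction theorem, an n-fold rotation is compatible with a translationally periodic lattice only if 2·cos(2π/n) is an integer, which rules out exactly the 5-, 8-, 10- and 12-fold symmetries found in quasi-crystals. A minimal sketch:

```python
import math

# Crystallographic restriction theorem: an n-fold rotation axis fits a
# translationally periodic lattice only when the trace of its rotation
# matrix, 2*cos(2*pi/n), is an integer.
def lattice_compatible(n: int) -> bool:
    trace = 2 * math.cos(2 * math.pi / n)
    return abs(trace - round(trace)) < 1e-9

# Only 1-, 2-, 3-, 4- and 6-fold symmetry survive; the 5-, 8-, 10- and
# 12-fold symmetries of quasi-crystals cannot occur in a periodic crystal.
print([n for n in range(1, 13) if lattice_compatible(n)])  # -> [1, 2, 3, 4, 6]
```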


I’ve discovered classified research showing quasi-crystals as promising candidates for high-energy storage materials, metal matrix components, thermal barriers, exotic coatings, infrared sensors, high-energy laser applications and electromagnetics. Some high strength alloy surgical tools are already on the market.

One of the stories I was told more than once was that one of the crystal pairs used in the propulsion of the Roswell crash was a Hydrogen Crystal. Until recently, creating a hydrogen crystal was beyond the reach of our scientific capabilities. That has now changed. In one Top Secret Black Program, under the DOE, a method to produce hydrogen crystals was discovered, and then manufacturing began in 1994.

The lattice of hydrogen quasi-crystals, and another material not named, formed the basis for the plasma shield propulsion of the Roswell [ New Mexico, USA UFO crash ] craft and was an integral part of the bio-chemically engineered vehicle.

A myriad of advanced crystallography – undreamed-of by scientists – were discovered by the scientists and engineers who evaluated, analyzed, and attempted to reverse-engineer the technology presented with the Roswell vehicle and eight [ 8 ] more vehicles which have crashed since then.

I wrote down everything I saw, heard and touched in my log – every night before going to bed. By the way, the food at the Groom Lake mess hall was excellent, but what would you expect – there was no cable TV, no alcohol and no women. I guess they figured they’d better do something right.

Later, while back at the base, my routine went on as normal – as did my part-time job that summer at the Silver Dollar Saloon [ Las Vegas, Nevada ].

My NSA friend, Jerald – who managed a team that investigated and ‘watched’ those with highly classified jobs at the Nevada Test Site and the Nellis Range, among other highly classified facilities – happened to show up.

I met Jerald in 1976 when I was part of a team that moved the F-15 operation from Edwards AFB to Nellis AFB and set up the joint Navy – Air Force AIMVAL / ACEVAL program.

He was checking up on a guy who had a drinking problem who worked at the Nevada Test Site ( S-4 ) where they set off underground atomic explosions.

He [ Jerald ] happened to mention a vehicle that could be boosted into orbit and return and land in the Nevada desert. This was during the late 1970s. It was an Unmanned Reconnaissance Vehicle ( URV ) – launched from a B-52 bomber – and used booster rockets to place it in temporary low earth orbit ( LEO ) for taking high-altitude reconnaissance photographs. I thought he was feeding me a line of bull. Then he said, “This vehicle is remote-piloted ( unmanned ) with communications via the DOCILE system at Groom.”

I’m not usually too slow, but it didn’t hit me until he repeated, “You know, the Direct Orbital Code Interface Link Emitter ( DOCILE ).” Bingo, the light bulb went on, I had seen – at Groom Lake [ Nevada ] a part of the DOCILE equipment – the NSA briefcase sized unit with the large chips.

These are old pictures of the Virtual Reality Laboratory ( VRL ) at Brooks Air Force Base ( BAFB ) where the software – to remotely fly exotic aircraft – was developed.

Let me get back to the development of [ my book ] Alien Rapture – The Chosen.

After I agreed to write my co-conspirators’ story, I talked to several military Judge Advocate General ( JAG ) lawyers, who were told I wanted to write about some of my experiences in the military and that I had been on many classified projects. I was told, I had to write my story as ‘fiction’, which I have.

I was told, I couldn’t name any real individuals with clearances or covers, or use their working names, which I haven’t.

I was also told, I couldn’t discuss any secrets of programs I had been personally assigned to, which I have ‘not done’ and ‘will never do’.

Then I was told that, so long as I did all that, I could damn well write anything I wanted to. Of course I did ‘not’ tell them that I was going to write about the government conspiracy to cover-up UFO contact and reverse engineering of alien technology. Or, that I was interviewing pilots who flew classified ‘aircraft’, plus others who had worked Black Programs.

In the summer of 1992, we again met in Las Vegas, Nevada. I had compiled my notes from our first meeting, my interviews, and the input my five ( 5 ) friends had passed on to me, each of whom reached out to their friends and contacts, uncovering even more information.

We agreed I was the only one who could get away with writing about our experiences, since I was planning on getting out of the D.O.D. ( Department of Defense ) industry.

My friends for the most part, were still connected.

Bud, one of my co-conspirators and close friends, informed me he had a cancerous tumor and was going through some severe depression. He was dead 30-days later. It was a real blow to us. We had lost Jerald, 1-year before, to his heart attack.

Of the remaining three ( 3 ) friends, Sal dropped off of the face of the earth, and none of his contacts – nor mine – have been able to locate him for 2-years now. Sal was extremely paranoid about the two ( 2 ) deaths ( of Bud and Jerald ), and had second thoughts about the book. Sal said he was going to move and didn’t know when or if he would contact me next. I like to think of him sipping a tropical drink on some Pacific island.

Let me talk about my friend Doc, who has a theory that UFOs were drawn to fast aircraft. Doc, an SR-71 pilot whom I knew well, was stationed at Kadena AFB ( KAFB ) during 1973, on the SAC ( Strategic Air Command ) side of the base.

While flying back across the South China Sea from a reconnaissance mission, Doc encountered a shadow over his cockpit. Doc said his avionics systems went totally haywire, and he felt the aircraft nose down slightly, which can be dangerous at 2,000 miles per hour ( about 33 miles per minute ). When he looked up, Doc was so startled that he almost panicked and immediately made an evasive maneuver to the right and down – one of many maneuvers made if an approaching missile is detected.

Doc said the object was so big that it totally blocked out the sun. His estimate was that it was 250-feet to 300-feet across, oval shaped and appeared bright blue-grey in color, however Doc wasn’t sure because it also had a shimmering halo of energy surrounding the vehicle.

About 3-minutes later, and some thousands of feet lower, the vehicle reappeared to the side of his left wing tip. He tried his UHF radio and all he could pick up was a deep electrical hum. He abandoned his attempts to use his radio, as his immediate survival was more important.

For the next 10-minutes, the large oval vehicle moved from his left wing tip to the rear of the SR-71 and then to his right wing tip. The movement from the left, to the rear, to the right wing tip took about 2-minutes, and then it reversed the movement. During the UFO’s last swing to the rear of his SR-71 his aircraft started buffeting wildly, which is terrifying at Mach 3, then it stopped after about 15-seconds, and then he never saw it again.

Doc said he heard a sound in his head, ‘like a swarm of bees in my brain,’ as he described it. When Doc returned from the mission he immediately went to his debriefing. The minute he mentioned the incident with the Unidentified Aerospace Vehicle ( UAV ) to his commander, he was pulled away from the debriefing and taken to his commander’s office. His commander, a colonel, filled out an incident report, in detail, and then told Doc not to mention the incident to anyone or he would be subject to severe and speedy penalty under military regulations.

Doc told me he knew of no other SR-71 pilot or astronaut who hadn’t had a close encounter or a UFO sighting. Doc felt none would ever go on record with their experiences for fear of retaliation from the U.S. Department of Defense ( DOD ) and loss of their retirement pay and benefits for breaking their oath of secrecy.

During the 9-years after this in-flight incident, Doc said a few of his trusted friends related similar incidents, with the same type of vehicles, or glowing orbs of dense light, dancing around their SR-71s.

Then Doc told me another story about his friend Dave, another SR-71 Blackbird pilot, who – while drunk on sake in Japan – told him in whispers that he hadn’t been a drinker until 6-months earlier, when he made a reconnaissance flight over the eastern border of Russia and returned delirious and semi-conscious.

Dave’s SR-71 crew had to pull him out of the cockpit. The Flight Surgeon attributed his symptoms to loss of oxygen. Dave didn’t share his nightmares ( with U.S. Air Force base physicians ), for fear the Flight Surgeon would ground him and he would lose his flying status; however, under the influence of alcohol – in a quiet bar – with a trusted fellow SR-71 Blackbird pilot and friend ( Doc ), Dave opened up.

Dave tearfully related – in an emotional story – having frequent nightmares that something had gotten to him during his flight over Russia. For Dave, what made matters worse was he had absolutely no memory of the flight from the time he lifted off from Osan Air Base ( Korea ) until the day after he returned and found himself in the Naval Regional Hospital in Okinawa, Japan.

I managed to track down Dave, who lives in Southern California, who confirmed off-the-record the incident relayed by Doc to me was true. Dave said he was actually happy someone was writing about stories of contact and sightings by military pilots, and was sure he had some type of contact with a UFO.

One day, while still at Nellis Air Force Base test range S-4, we were informed an F-15 had crashed on the Nellis Range, where Area 51 is located. The F-15 crash happened in 1977. A Lieutenant Colonel and Doc Walters, the Hospital Commander, actually flew into the side of a mountain while doing a routine Functional Check Flight.

A sergeant who worked for me, and who had earlier been assigned to the Accident Investigation Team, recovered the F-15 Heads-Up Display film canister. He told me that a guy ( in a dark jump suit ) out of Washington D.C. personally took that film canister away from him – unusual, because everything else was picked up and logged first, then taken back to the assigned hangar for analysis – in-addition to a ‘prototype video camera’ ( also on the aircraft ) and the flight data recorder; all handed over to the guy from Washington, D.C.

One night a couple of weeks after the crash, my U.S. National Security Agency ( NSA ) friend, Jerald ( Gerald ? Jerrold ? ) relayed to me – at the Silver Dollar Saloon ( Las Vegas, Nevada ) – that the aforementioned Lieutenant Colonel radioed the Nellis Tower that he had an ‘extremely large thing’ hovering over his aircraft and was experiencing loss of flight systems. His communications went dead and a few seconds later his aircraft exploded into the side of a mountain-top.

Jerald, who was the most ‘connected’ person I’ve ever known, told me the ‘video’ showed some type of oval vehicle – of tremendous size – so close to the F-15 that its camera appeared to have gone out of focus.

When Doc and the Lieutenant Colonel ejected, the UFO was still above them; their bodies were torn to shreds. Officially, it was determined – as is always the case – that ‘pilot error’ caused a perfectly functional aircraft – in clear airspace with maximum visibility – to crash.

Nevada calls itself the Silver State, the Battle-Born State, and the Sagebrush State. A more appropriate motto would be: the Conspiracy State.

Of the 111,000 square miles of land in Nevada, over 80% is controlled by the federal government – the highest percentage of any state in the U.S. If it were not for the gaming industry, the federal government would be Nevada’s largest employer, with 18,000 federal and military personnel and another 20,000 government contractors and suppliers.

The Nevada Test Site, Nellis Air Force Base and Range [ Nevada ], Fallon Naval Air Station [ Nevada ], the Tonopah Range, and the aerospace industry eat up a lot of U.S. tax dollars.

The Nevada Test Site and the Nellis Range have myriad secrets yet to be revealed, including a super-secret laboratory: the Defense Advanced Research Center ( DARC ).

DARC is located, inside the Nellis Range, 10 stories underground.

DARC was built in the 1980s with SDI ( Strategic Defense Initiative ) money.

DARC is next to a mountain – near Papoose Lake, south of Groom Lake, Nevada – where the TR-3Bs ( TIER III B ) are stored in an aircraft hangar built into the side of a mountain near DARC. The Nellis Range covers more than 3,500,000 acres.

EG&G, a government contractor, provides classified research and development services for the military and government, and supplies technical and scientific support for nuclear testing and energy research and development programs.

EG&G provided large diameter drilling, mining and excavation for DARC underground and mountainside facilities.

EG&G built these DARC hidden bunkers, mountain hangars and vast underground facilities at Groom Lake, Papoose Lake, and Mercury for the government – facilities and observation posts well camouflaged inside the Nevada Test Site and the Nellis Range.

Starting in 1971 and continuing through 1975, a massive amount of excavation took place at the Groom Lake facility and Papoose Lake facility where most subsequent construction also took place underground.

In 1972, EG&G was granted an ‘indefinite contract’ called “Project Red-Light” to support the U.S. Department of Energy ( DOE ) and the military. This contract gave EG&G responsibility to assist in the recovery of nuclear materials in cases of mishaps and to provide aerial and ground security for highly classified government and military sites.

My sources say DOE and NSA are primarily responsible, to the MJ-12 committee, for reacting to UFO sightings and for recovering UFO artifacts in cases of a crash.

So what’s going on more recently, you may ask? Let’s talk about the newest secrets and rumors:

  [ photo ( above ) AVRO VZ-9 AeroCar ( 1957 BELL Laboratory ( Canada ) contractor for CIA ) ]

The Hillary platform, the AVRO saucer and the NORTHROP sections were where aerospace vehicles with advanced technology were developed and tested – each emulated some characteristic of UFOs as described by the late Dr. Paul Hill, a NASA UFO investigator. Hill’s posthumously published book, Unconventional Flying Objects, discusses UFO technology extensively. If you have not read this illustrious tome, I suggest you do so.

Newly declassified documents show that AVRO [ Canada ] built and tested a number of saucers – at Area 51 in Nevada [ USA ] – contrary to DOD lies that the program was canceled because it failed to meet expectations. LOCKHEED Advanced Development Projects Division – also known as the “Skunk Works” – developed the A-12 for the U.S. Central Intelligence Agency ( CIA ), and a later version, the SR-71, for the USAF in the early 1960s.

Thirty years later, the SR-71 was still breaking world speed records. The sleek, matte-black, stiletto-shaped SR-71 Blackbird spy plane – traveling 2,000 miles in 1 hour 4 minutes ( including multiple orbits, touch-down and take-off ) – broke the world air speed record from Los Angeles, California to Washington, D.C. on its retirement flight in 1990.
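The record figures above can be checked with simple arithmetic; a minimal sketch, using only the distance and time cited in the text:

```python
# Average speed implied by the cited SR-71 retirement flight:
# 2,000 miles in 1 hour 4 minutes (figures from the text above).
distance_mi = 2000.0
time_hr = 1 + 4 / 60               # 1 hour 4 minutes, in hours
avg_speed_mph = distance_mi / time_hr
print(round(avg_speed_mph))        # -> 1875 mph average ground speed
```

That average is roughly Mach 2.8 at altitude, consistent with the SR-71’s Mach 3-class performance mentioned elsewhere in this account.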

Area 51 ( the Groom Lake Air Base, Nevada facilities ) has a 6-mile-long runway, the longest in the United States, where the most exotic U.S. Department of Defense ( DOD ) and CIA aerospace vehicles are tested and modified. Area 51 is a place where curious outsiders circulate rumors about aliens and extraterrestrial technology being utilized to accelerate its various programs.

Why a 6-mile-long runway? You need a runway this long if the minimum, or stall, speed of an aircraft is very high. Aircraft without conventional wings – like wedge-shaped lifting bodies, or those with 75-degree swept-back wings – have very high stall speeds; they lift off very fast, and they land even faster.

My sources estimate that up to 35% of SDI ( Strategic Defense Initiative – “Star Wars” ) funding was siphoned-off to provide the primary expenditures for the U.S. Air Force’s most secret ‘Black Program’, begun in 1982: the AURORA Program.

AURORA, the Program, ‘is’ the ‘code name’ of the ‘ongoing Program’ to have advanced aerospace vehicles built and tested.

AURORA, contrary to popular belief, is ‘not’ the name of an individual aircraft.

The AURORA Program takes its name from the “aurora borealis” – excited gas in the upper atmosphere. As early as 1992 the Air Force had already made contingency plans to move some of its aircraft out of Groom Air Base ( Area 51 ); the public eye was on the base, and government officials did ‘not’ like that one bit.

Everything needing a long runway, like the SR-75, was removed by early 1992 to other bases in Utah, Colorado, Alaska, Greenland, Korea, Diego Garcia, and remote islands in the Pacific.

Short take-off and landing ( STOL ) vehicles – especially the bat-wing TR-3A ( TIER III A ) Manta and the TR-3B ( TIER III B ) Astra, the ‘Flying Triangle’ – were relocated to Papoose Lake, Nevada ( USA ) in the southern part of the Nellis Air Force Base ( AFB ) Test Range, an area called Area S-4. Other than the SR-75 – still being dispersed to other locations – research and development ( R&D ) and operational flight test and evaluation continue at Area 51, more so now than ever before.

For the last few years high-tech buffs have speculated that at least one ( 1 ) new and exotic aerospace vehicle existed: the SR-75, the first AURORA Program vehicle, which went operational in 1989 – two years after flight testing and modifications were completed in 1987.

The SR-75 is a hypersonic strategic reconnaissance vehicle ( SRV ) spy plane called the Penetrator – a mother ship, as I will explain shortly.

Hypersonic speeds start at approximately Mach 5.

The SR-75 replaced the SR-71 spy plane, retired in 1990 by the U.S. Air Force, which said, “There is no replacement; all we really need is our spy satellites to do the job.” Hmm?

The DOD, upon analysis of Desert Storm, admitted satellites alone could not provide the quick-response real-time reconnaissance information needed by various military agencies. Yet it has repeatedly fought U.S. Congressional efforts to bring back the SR-71. Why? The answer should be obvious.

The SR-75 can reach any position on the globe in less than 3 hours, and is equipped with multi-spectral sensors ( optical, radar, infrared and laser ) for collecting images, electronics intelligence ( ELINT ) and signals intelligence ( SIGINT ), illuminating ( lighting-up via laser ) targets, and more.

The SR-75 far exceeds the classified military speed and altitude records set by its predecessor, the SR-71 – still officially Mach 3.3+ ( confidently add another 3 to 4 Mach ) with ceiling altitudes of at least 85,000 feet.

The SR-75 has attained altitudes of over 120,000 feet and speeds exceeding Mach 5 ( 5 times the speed of sound ) – equating to more than 3,300 miles per hour.

The SR-75, from take-off to landing, can make the round trip from central Nevada to Northeast Russia and back in under 3 hours.
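A rough plausibility check of the Mach and round-trip figures above. The ~660 mph speed of sound is the standard-atmosphere value above roughly 36,000 feet; the 3,500-mile one-way distance ( central Nevada to Northeast Russia ) is my own rough assumption, not a figure from the text:

```python
# Sanity-checking the cited Mach 5 / 3-hour round-trip figures.
mach_1_mph = 660.0                      # speed of sound at high altitude
cruise_mph = 5 * mach_1_mph             # Mach 5 -> 3,300 mph, as cited
one_way_mi = 3500.0                     # ASSUMED great-circle distance
round_trip_hr = 2 * one_way_mi / cruise_mph
print(cruise_mph, round(round_trip_hr, 1))   # -> 3300.0 2.1
```

At the cited Mach 5 cruise, a ~7,000-mile round trip takes about 2.1 hours – comfortably inside the under-3-hours claim, before allowing for climb, descent and turnaround.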

The SR-75 is 162 feet long and has a wing span of 98 feet.

The SR-75’s fuselage begins 10 feet above the ground.

The SR-75 carries a flight crew of three ( 3 ): pilot, reconnaissance officer and launch control officer, the latter of whom doubles as the SR-75 Penetrator’s electronic warfare ( EW ) officer. Two ( 2 ) methane- and LOX-fueled high-bypass turbo-ramjet ( combined-cycle ) engines are housed under the wings, in bays that run 40 feet and terminate at the trailing edge of each wing.

The SR-75 Penetrator’s two ( 2 ) Pulse Wave Detonation engines push speeds above Mach 5 – now reported at Mach 7+, or 4,500 mph ( miles per hour ), with new engine modifications. Although this plane has been sighted on numerous occasions and picked up on military radar, and its pulse wave detonation contrail has been seen, the Air Force vehemently denies its existence.

The SR-75’s large Pulse Wave Detonation engine bay inlets – located under each wing of the black mother ship, hanging downward 7 feet – are 12 feet wide.

The SR-71 Blackbird, the SR-75 Penetrator ( black mother ship ), and its daughter ship, the SR-74 ( Scramp ), were all built by LOCKHEED Advanced Development Company ( Skunk Works ).

SR-74 Scramp, daughtership of the SR-75, is a ‘drone’ launched from ‘under’ the SR-75 black mothership fuselage.

SR-74 Scramp, after launch, utilizes a supersonic combustion ram-jet engine ( Scramjet ).

Jerald witnessed the flight of the big black Air Force SR-75 carrying the little unmanned SR-74 while inside Area 51 where it was initially positioned piggyback ( on its upper raised platform ) atop the SR-75 Penetrator.

[ photo ( above ) LOCKHEED YF-12-A / SR-71 Blackbird drone: D-21 ( drone ) ( click to enlarge ) ]

Remember, the SR-74 Scramp can ‘not’ launch itself from the ground.

I heard SR-75 talk as far back as the late 1970s, when I was at Groom [ Groom Lake ], and I have two ( 2 ) additional friends who saw the SR-75 at Groom Lake, Nevada.

The SR-74 Scramp can only launch from its SR-75 mother ship at altitudes above 100,000 feet, from where it can then attain orbital altitudes of over 800,000 feet ( 151+ miles ).

The U.S. Air Force SR-74 Scramp is for launching small, highly classified ‘ferret satellites’ for the U.S. National Security Agency ( NSA ). The SR-74 Scramp can launch at least two ( 2 ) 1,000-pound satellites measuring 6 feet by 5 feet.

The SR-74 Scramp is roughly the equivalent size and weight of an F-16 fighter.

The SR-74 can easily attain speeds of Mach 15, or a little less than 10,000 miles per hour. The NASA Space Shuttle is ‘technologically antique’ by comparison – a taxpayer joke.

If you think these rumors are far-fetched, look at the YB-49 and the XB-70, flown in 1948 and 1964 respectively, then look at the SR-75, which has been spotted numerous times.

Some say, “The government cannot keep a secret.” They are wrong.

There are new rumors that the U.S. has placed two ( 2 ) new vehicles in permanent orbit. One of these is the Space Orbital Nuclear – Service Intercept Vehicle ( SON-SIV ), code name LOCUST.

The SR-74 SCRAMP and the TR-3B [ TIER III B ] can deliver Spare Replacement Units ( SRU ), service fuels, fluids and chemicals to the SON-SIV.

The SON-SIV robotically uses those deliverables to service, calibrate, repair and replace parts on the newer NSA, CIA and NRO ( National Reconnaissance Office ) satellites built to be maintained ‘in space’.

Finally, I’ve saved the best for last: the operational model of the TR-3B [ TIER III B ].

The early information I gathered from interviewing my contacts and their closest friends who worked black programs resulted in the basic specifications of the TR-3B [ TIER III B ] Flying Triangle. I had this simple drawing by late 1990.

On the night of March 30, 1990 a Captain of the Belgium ( Belgique ) National Police decided to pursue incoming reports about a ‘triangular’-shaped UFO. Two ( 2 ) radar installations – one a NATO defense group, the other a Belgium ( Belgique ) civilian and military radar – verified the triangle UFO.

Excellent atmospheric conditions prevailed, and there was no possibility of false echoes due to temperature inversions. At 5 AM, two ( 2 ) dispatched F-16 fighters spotted the Triangle on their radar screens and locked onto the target.

Six seconds later the object sped up from an initial velocity of 280 kilometers per hour to 1,800 kilometers per hour while descending from an altitude of 3,000 meters to 1,700 meters, then down to 200 meters, causing the F-16 radars to lose lock-on. This maneuver happened in a matter of 1 second. The 40 G acceleration of the Triangle was some 32 gravitational forces higher than what a human pilot can stand.
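The 40 G figure can be reproduced from the radar numbers given above ( 280 km/h to 1,800 km/h in roughly one second ):

```python
# Acceleration implied by the Belgium radar track cited above.
dv_ms = (1800 - 280) / 3.6     # delta-v in m/s (km/h -> m/s)
accel_g = dv_ms / 9.81         # in units of standard gravity
print(round(accel_g, 1))       # -> 43.0 g, in line with the cited "40 G"
```

The one-second duration is the report’s figure; any longer interval would scale the G-load down proportionally.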

Contrary to normal aeronautical expectations, no sonic boom was heard. This phenomenal game of hide-and-seek was observed by twenty [ 20 ] Belgium National Policemen and hundreds of other witnesses, who all saw the Triangular vehicle and the F-16 fighters. The chase was repeated twice more during the next hour.

The Belgians have made all information on this event public, unlike our U.S. government, which admits nothing and denies everything to do with UFOs – even when some are ours.

[ photo ( above ) another Triangle craft believed over Turkey enroute to Iraq ( click to enlarge ) ]

The original TR-3B photo was taken with a digital camera aboard a special black-operations C-130 cargo plane, by a U.S. Air Force Special Operations sergeant flying mission support for the TR-3B.

I’ve seen this picture personally and have interviewed several people who worked on the program. I’m sure of my facts and specifications.

You can see for yourselves, from the Belgium pictures – a computer-generated software composite rendition resulting from the Europe sightings, plus my original schematic taken from interviews – that this ‘is an accurate rendition’ of the TR-3 [ TIER III ].

From the ‘original digital photograph’ of the TR-3B, a ‘computer-graphic enhanced representation’ – made using 3D rendering software – hangs on the wall inside a ‘black vault’ at the AURORA Program Office. I am not at liberty to divulge further details about the digital picture, except to say a friend took a great career risk taking it and showing it to me.

We have used these highly accurate computer graphic pictures of the Prototype and Operational models of the TR-3B to get further verification of their accuracy. You will not get a clearer picture of what the Flying Triangles are until one lands amidst ‘public domain’ and is captured by CNN or other news media.

Jerald said he would never forget the sight of the alien-looking TR-3B based at Papoose Lake, Nevada, where the pitch-black triangular-shaped craft was rarely mentioned – and then only in hushed whispers – at the Groom Lake facility where he worked.

The TR-3B [ TIER III B ] flew over the Groom Lake runway in complete silence and magically stopped above Area S-4, where it hovered silently in the same position for about 10 minutes before gently settling vertically to the tarmac.

At times a corona of silver-blue light glowed around the circumference of the massive TR-3B. The operational model of the TR-3B is 600 feet across.

The TR-3B ‘prototype’ ( 200-foot ) and TR-3B ‘operational’ ( 600-foot ) versions of the craft are code-named ASTRA.

The TR-3B tactical reconnaissance version’s first [ 1st ] ‘operational flight’ was in the early 1990s.

TR-3A Manta [ TIER III A Manta ] is a subsonic reconnaissance vehicle shaped like a bat wing and is in no way related to the TR-3B.

The nomenclature for the TR-3B is unconventional, named thus to confuse those tracking black-budget Programs and Projects. Rumors are leaked to be purposefully confusing, as most are in the aerospace industry: one would think there ‘must be a relationship’ between the TR-3A and the TR-3B, although there is ‘no relationship’.

The TR-3B triangular-shaped nuclear-powered aerospace platform was developed under the Top Secret AURORA Program with SDI ( Strategic Defense Initiative – Star Wars ) and black-budget monies. At least three [ 3 ] TR-3Bs – at $1,000,000,000 ( one billion dollars ) + each – were flying by 1994.

AURORA is the most classified aerospace development program in existence – with its TR-3B being the most exotic vehicle created from the AURORA Program – funded and operationally tasked by the National Reconnaissance Office ( NRO ), NSA, and CIA.

The TR-3B flying triangle is ‘not fiction’ and was built with technology available in the mid 1980s. It uses more reverse-engineered alien technology than any vehicle before it – but ‘not’ “every” UFO spotted is one of theirs.

The TR-3B vehicle’s outer coating is electro-chemically reactive and can fluctuate under radar electrical-frequency ( EF ) stimulation, causing reflective color-spectrum changes from radar absorption.

The TR-3B is the first [ 1st ] U.S. aerospace vehicle employing quasi-crystals within its polymer skin. Used in conjunction with the TR-3B’s Electronic Counter-Countermeasures ( ECCM ), the skin can make the vehicle look like a ‘small aircraft’ or a ‘flying cylinder’, and can also fool radar receivers into falsely detecting either a ‘variety of aircraft’, ‘no aircraft’, or ‘several aircraft placed in various locations’.

Research into electro-chromatic polymer skins can be uncovered that points to these stealth material properties.

A couple in Ohio spotted the Triangle in early 1995. The man first noticed its orange ball of light, then the triangle shape with three [ 3 ] bright spots, one at each corner. As it moved slowly southward, they were awestruck by its enormous size. “The damn thing is real,” he exclaimed to his wife, “It’s the flying triangle.” The man said it was the size of “two [ 2 ] football fields” – making it 200 yards, or 600 feet, across – the same as the TR-3B operational version of the craft.

From the collection of pictures, analysis and further refinement we now have a better schematic layout of the Top Secret USAF Flying Triangle [ TR-3B ], seen by thousands, which the U.S. Department of Defense ( DOD ) and the U.S. government claim ‘does not exist’.

A circular, plasma-filled accelerator ring called the Magnetic Field Disrupter ( MFD ) surrounds the rotatable crew compartment and is far ahead of any imaginable technology.

Sandia and Livermore U.S. National Laboratories developed the reverse-engineered MFD technology.

The government will go to any lengths to protect this technology. The mercury-based plasma is pressurized at 250,000 atmospheres at a temperature of 150 kelvin, and accelerated to 50,000 rpm to create a super-conductive plasma with the resulting gravity disruption.

The MFD generates a magnetic vortex field which disrupts or neutralizes the effects of gravity on mass within proximity by 89 percent. Do not misunderstand: this is ‘not’ anti-gravity. Anti-gravity provides a ‘repulsive force’ that can be ‘used for propulsion’.

The MFD creates a disruption – of the ‘Earth gravitational field’ – upon the mass within the circular accelerator.

The mass of the circular accelerator and all mass within the accelerator, such as the crew capsule, avionics, MFD systems, fuels, crew environmental systems, and the nuclear reactor, are reduced by 89%.

A side note to the Magnetic Field Disruptor’s development: one source who worked at the GD Convair Division in the mid 1960s described a mercury-based plasma that was cooled to super-conductive temperatures, rotated at 45,000 rpm and pressurized at thousands of atmospheres. This would be considered state-of-the-art technology even by today’s standards, some 30 years after he worked on this project.

He related that the project achieved its objective: instruments and test objects within the center of the accelerator showed a 50 percent loss of weight, attributed to a reduction in the gravitational field. He had worked on MFD as far back as 1965 and was told by a senior scientist that the research had been going on for a decade. See: Convair notes from Gravitics research and the Gravity RAND article.

The current MFD in the TR-3B has the effect of making the vehicle extremely light, able to outperform and outmaneuver any craft yet constructed – except, of course, the UFOs we did not build.

The TR-3B is a high altitude, stealth, reconnaissance platform with an indefinite loiter time. Once you get it up there at speed, it doesn’t take much propulsion to maintain altitude.

At Groom Lake there have been whispered rumors of a new element that acts as a catalyst to the plasma.

Recently NASA and Russia have admitted breakthroughs in technology that would use a plasma shield for exotic aerospace vehicles. If you know anything about the history of classified black programs, you know that by the time NASA starts researching something, it’s either proven or old technology. They are the poor stepchildren when it comes to research and development technology and funding.

With the vehicle’s mass reduced by 89%, the craft can travel at Mach 9, vertically or horizontally. My sources say the performance is limited only by the stresses the human pilots can endure – which is a lot, really, considering that along with the 89% reduction in mass, the G-forces are also reduced by 89%.

The crew of the TR-3B should be able to comfortably take up to 40 Gs – the same flight characteristics described in the Belgium sightings and many others. Reduced by 89%, the occupants would feel about 4.2 Gs.
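The felt-G arithmetic above is just the claimed 89% reduction applied to the maneuver load; a one-line check:

```python
# An 89% reduction leaves 11% of the load felt by the occupants.
maneuver_g = 40.0
reduction = 0.89
felt_g = maneuver_g * (1 - reduction)
print(round(felt_g, 1))   # -> 4.4 g, close to the ~4.2 G figure cited
```

The text’s 4.2 G figure would correspond to a reduction closer to 89.5%; the small discrepancy is in the source, not the arithmetic.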

The TR-3B’s propulsion is provided by 3 multimode thrusters, one mounted at each bottom corner of the triangular platform. The TR-3B is a sub-Mach 9 vehicle until it reaches altitudes above 120,000 feet – then who knows how fast it can go.

The three [ 3 ] multimode rocket engines, one mounted under each corner of the craft, use hydrogen or methane and oxygen as propellant. In a liquid oxygen / hydrogen rocket system, 85% of the propellant mass is oxygen.
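The 85% figure is consistent with typical LOX/LH2 engine mixture ratios; the 6:1 oxidizer-to-fuel mass ratio below is a typical value I am assuming, not one given in the text:

```python
# Oxygen mass fraction implied by a typical LOX/LH2 mixture ratio.
of_ratio = 6.0                           # ASSUMED O/F mass ratio (~typical)
oxygen_fraction = of_ratio / (of_ratio + 1)
print(round(oxygen_fraction * 100, 1))   # -> 85.7 percent of propellant mass
```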

The nuclear thermal rocket engine uses a hydrogen propellant, augmented with oxygen for additional thrust.

The reactor heats the liquid hydrogen and injects liquid oxygen in the supersonic nozzle, so that the hydrogen burns concurrently in the liquid oxygen afterburner.

The multimode propulsion system can operate in the atmosphere, with thrust provided by the nuclear reactor; in the upper atmosphere, with hydrogen propulsion; and in orbit, with combined hydrogen / oxygen propulsion.

What you have to remember is that the 3 multi-mode rocket engines only have to propel 11% of the mass of the Top Secret TR-3B. Rockwell reportedly builds the engines.
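The 11% claim can be turned into a simple thrust-to-weight sketch. The gross mass and nominal thrust-to-weight figures below are pure placeholders of mine, not values from the text; only the 1/0.11 ratio is being illustrated:

```python
# If the thrusters only carry 11% of the vehicle's effective mass, the
# same engines deliver ~9x the acceleration authority.
g0 = 9.81
gross_mass_kg = 100_000.0               # HYPOTHETICAL gross mass
effective_mass_kg = 0.11 * gross_mass_kg
thrust_n = 1.5 * gross_mass_kg * g0     # engines sized for a nominal T/W of 1.5
effective_twr = thrust_n / (effective_mass_kg * g0)
print(round(effective_twr, 2))          # -> 13.64, i.e. 1.5 / 0.11
```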

From the evolution of exotic materials, advanced avionics and newer propulsion engines, the stealth aircraft were born. Leaps in technology have been obtained through reverse engineering of Alien Artifacts, as described in the newly released MJ-12 Revised Charter signed during the Reagan [ Ronald Reagan ] Administration.

According to Jerald’s account, the technology developed at Papoose far exceeded any known within the world scientific community. Jerald was in his late 50s when I first met him in LV [ Las Vegas ]. He had actually spoken to scientists who analyzed the Roswell vehicle and technology – technology that we can assuredly assume was developed from reverse engineering of recovered alien artifacts. The control of all Alien Artifacts – the research, the reverse engineering, and the analysis of the Extraterrestrial Biological Entities ( EBE ) – was transferred to the super-secret laboratory, the Defense Advanced Research Center ( DARC ), in Area S-4.

Many sightings of triangular UFOs are not alien vehicles but the top secret TR-3B. The NSA, NRO, CIA, and USAF have been playing a shell game with aircraft nomenclature.

The TR-3 was created, then modified into the TR-3A and the TR-3B, alongside Tier 2, 3, and 4 designations with suffixes like Plus or Minus added on to further obscure the fact that each of these designators is a different aircraft [ e.g. an Unmanned Aerial Vehicle ( UAV ) ], not the same aerospace vehicle. [ See, e.g. “ DARPA Photos ” on this website for more on UAVs and UCAVs with Tier designations. ]

A TR-3B is as different from a TR-3A as a banana is from a grape. Some of these vehicles are manned and others are unmanned.

Before Jerald died, we had a long conversation. He was sure he had documentation that would prove the existence of the MJ-12 committee and our using crashed alien vehicles to reverse engineer their technology. I told him that I did not want any classified documents in my possession. I never found out what happened to them.

I also believe the recently deceased Colonel Corso, who disclosed the government’s involvement with alien technology, was an honest and honorable man. I believe he was on the inside of administering alien artifact protocol for the Army, though he might have embellished the depth of his involvement.

I don’t have time to go into the two [ 2 ] unique MJ-12 documents that I acquired.

The characters in [ my book ] Alien Rapture are fictional, but the facts of the covert government agenda to suppress alien artifacts, reverse engineering of alien [ ExtraTerrestrial ] technology, and the details of Black Programs are absolutely true.

Part of our agreement was that each of my five close friends would get a chance to look at the manuscript before I sent it to any Literary Agents.

Dale discussed the Alien Rapture manuscript with his father, who worked high up in the NSA for over 20-years. His father asked him how much he trusted me, and Dale told him ‘completely’.

Dale’s father provided him with two MJ-12 documents, and told him to retype them, and send them to me with the understanding that I would not ever reveal the source of them.

The documents that Dale retyped had most of the names and dates blacked out. This is how I received these documents, and I was so naive that I didn’t even know what the histories of the MJ-12 documents were.

From as far back as the Viet Nam conflict [ war ], I knew Dale, and he was as close to his father as a son can get. I do not believe his father used him to distribute disinformation. Whether his father was duped, I have no idea. In my personal opinion, the MJ-12 committee was real, and it still exists in some form.

The TR-3B’s anti-gravity physics can be explained insofar as the theory of general relativity can account for anti-gravity flight anomalies.

Edgar Fouché describes (above) the TR-3B’s propulsion system as follows: “A circular, plasma filled accelerator ring called the Magnetic Field Disrupter, surrounds the rotatable crew compartment and is far ahead of any imaginable technology…

The plasma, mercury based, is pressurized at 250,000 atmospheres at a temperature of 150 degrees Kelvin, and accelerated to 50,000 rpm to create a super-conductive plasma with the resulting gravity disruption.

The MFD generates a magnetic vortex field, which disrupts or neutralizes the effects of gravity on mass within proximity, by 89%…

The current MFD in the TR-3B causes the effect of making the vehicle extremely light, and able to outperform and outmaneuver any craft yet constructed… My sources say the performance is limited only by the stresses that the human pilots can endure. Which is a lot, really, considering along with the 89% reduction in mass, the G forces are also reduced by 89%.

The crew of the TR-3B should be able to comfortably take up to 40Gs… Reduced by 89%, the occupants would feel about 4.2 Gs.

The TR-3B’s propulsion is provided by 3 multimode thrusters mounted at each bottom corner of the triangular platform. The TR-3 is claimed by some to be anything from a sub-Mach 9 vehicle up to a Mach 25+ lenticular-shaped craft once it reaches altitudes above 120,000 feet. No one really knows how fast it can go.

Many have been skeptical of Mr. Fouché’s claims. However, in an interesting scientific article, another claim is made: the charged particles of plasma don’t just spin uniformly around in a ring, but tend to take up a synchronized, tightly pitched, helical or screw-thread motion as they move around the ring. This can be understood in a general way: the charged particles moving around the ring act as a current that in turn sets up a magnetic field around the ring.

It is a well-known fact that electrons ( or ions ) tend to move in a helical fashion around magnetic field lines. Although it is a highly complex interaction, it only requires a small leap of faith to believe that the end result of these interactions between the moving charged particles ( current ) and associated magnetic fields results in the helical motion described above. In other words, the charged particles end up moving in very much the same pattern as the current on a wire tightly wound around a toroidal core.

In an article entitled “Guidelines to Antigravity” by Dr. Robert Forward, written in 1962 ( available at: http://www.whidbey.com/forward/pdf/tp007.pdf ), Dr. Forward describes several little-known aspects of Einstein’s general relativity theory, indicating how moving matter can create unusual gravitational effects. His Figure 5 indicates how a moving-matter pattern can generate a gravitational dipole – exactly the same as the plasma ring pattern described in the physics article discussed above.

If Fouché’s description is even close to correct, then the TR-3B utilizes this little-known loophole in the theory of general relativity to create its antigravity effects. Even though the TR-3B can supposedly cancel only 89% of gravity ( and inertia ) today, there is no reason why the technology can’t be improved to exceed 100% and achieve true antigravity capability.

In theory, this same moving-matter pattern could be mechanically reproduced by mounting a number of small gyroscopes all around the larger ring, with their axes on the larger ring, and then spinning both the gyroscopes and the ring at high speeds. However, as Dr. Forward points out, any such mechanical system would probably fly apart before any significant anti-gravity effects could be generated. As he states, “By using electromagnetic forces to contain rotating systems, it would be possible for the masses to reach relativistic velocities; thus a comparatively small amount of matter, if dense enough and moving fast enough, could produce usable gravitational effects.”
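Forward’s argument can be sketched in the gravitomagnetic ( GEM ) approximation of general relativity. The constants below follow one common convention, and only the scaling – not any TR-3B claim – is being illustrated:

```latex
% Linearized GR: the gravitomagnetic field \mathbf{B}_g sourced by a
% mass-current density \mathbf{j} (one common GEM convention):
\nabla \times \mathbf{B}_g = -\frac{16\pi G}{c^{2}}\,\mathbf{j}
% A ring of mass m circulating at speed v and radius r behaves as a
% gravitomagnetic dipole with moment of order
\mu_g \sim m\,v\,r ,
% giving a dipole field at distance r of order
B_g \sim \frac{G}{c^{2}}\,\frac{\mu_g}{r^{3}} .
% The 1/c^{2} suppression is why Forward concludes that only dense
% matter moving at relativistic v could produce usable forces.
```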

The requirement for a dense material moving at relativistic speeds would explain the use of mercury plasma ( heavy ions ). If the plasma really spins at 50,000 RPM and the mercury ions are also moving in a tightly pitched spiral, then the individual ions would probably be moving hundreds, perhaps thousands, of times faster than the bulk plasma spin in order to execute their “screw thread” motions. It is quite conceivable that the ions could be accelerated to relativistic speeds in this manner. I am guessing that you would probably want to strip the free electrons from the plasma, making a positively charged plasma, since the free electrons would tend to counter-rotate and reduce the efficiency of the antigravity device.
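A quick look at how fast the bulk plasma rim would actually move at the cited 50,000 RPM. The ring radius is not given anywhere in the text; the 3 m value below is a pure assumption of mine:

```python
import math

# Rim speed of a ring spinning at the cited 50,000 rpm.
rpm = 50_000.0
radius_m = 3.0                               # HYPOTHETICAL ring radius
rim_speed = 2 * math.pi * radius_m * rpm / 60   # meters per second
c = 2.998e8                                  # speed of light, m/s
print(round(rim_speed), round(rim_speed / c, 6))  # ~15,708 m/s, ~5e-5 c
```

Even at this speed the bulk spin is a tiny fraction of light speed, which is why the individual ions would need the much faster helical “screw thread” motion described above to approach relativistic velocities.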

One of the postulates of Einstein’s theory of general relativity is that gravitational mass and inertial mass are equivalent. This is consistent with Mr. Fouché’s claim that inertial mass within the plasma ring is also reduced by 89%. It would also explain ‘why the vehicle is triangular’ shaped: since it still requires conventional thrusters for propulsion, the thrusters need to be located outside the “mass reduction zone”, or else the mass of the thrusters’ reaction material would also be reduced, making them terribly inefficient. Since it requires a minimum of three [ 3 ] legs to have a stable stool, it follows that a minimum of three [ 3 ] thrusters is needed for a stable aerospace platform. Three [ 3 ] thrusters located outside the plasma ring, plus appropriate structural support, naturally lead to a triangular shape for the vehicle.

Some remain skeptical of the claimed size for the TR-3B of approximately 500 to 600 feet across. Why would anyone build a tactical reconnaissance vehicle almost two ( 2 ) football fields long? The answer to this may also be found in Dr. Forward’s paper. As Dr. Forward puts it, “…even the most optimistic calculations indicate that very large devices will be required to create usable gravitational forces. Antigravity…like all modern sciences will require special projects involving large sums of money, men and energy.” Dr. Forward has also written a number of other articles, at: http://www.whidbey.com/forward/TechPubs.html

The TR3-B was identified by U.S. military personnel as the “initial penetration bomb delivery vehicle” preceding the follow-up work done by the F-117 stealth fighters and B-2 bomber mop-up crews – not the aircraft the American media reported as the initial strike penetration vehicles used during the Persian Gulf War air strikes inside Iraqi airspace.

Recent sightings in December of 2000 have the TR3-B flying south from Utah across the mountain regions near the national monument valley of Taos, New Mexico. For more information and images, see, e.g., “UFO & ETT Pics” on this X-CIA Files website.

Aerial vehicles such as this ( below ) appear to have been around for decades.

[ photo ( above ) VRIL VII Manned Combat Aerial Vehicle ( MCAV ) Circa: 1941 ( East Germany ) ]

November 9, 2001

The Old Technology

Reverse-engineered information production – rarely used due to the time it consumes – may be utilized to uncover data previously gathered but somehow overlooked; it is a methodology seldom employed even by some of the most sophisticated intelligence-gathering agencies in the world.

The concept is not ‘new’, but it is the most feared form of information assimilation because it tears down the walls of classification and censorship around how things are, or were previously, perceived as fundamental beliefs. Little-known and forgotten practices and methodologies can easily circumvent new technologies by using the “Old Technology”.

An example of “Old Technology” was demonstrated in early-1970s-era Soviet MiG defense aircraft using avionics the West had last seen in 1950s-era television (TV) sets, i.e. electron vacuum tubes. During the 1970s, the West – sitting smug with transistorized miniature circuits in its fighter aircraft avionics – laughed at the Soviet use of electron vacuum tube technology in these advanced fighter jets as utterly outdated.

Years later, however, the West discovered that Soviet defense aircraft avionics equipped with electron vacuum tube technology withstood the EMP (electro-magnetic pulse) radiation – produced during above-ground nuclear detonations – that disrupted transistorized microcircuit technology. EMP wreaked havoc on Western ‘new technology’ electronics at great distances (sometimes hundreds of miles) from a ground-zero nuclear blast or from airburst electronic bombs, or e-bombs.

Old “TV tube” technology was actually “higher technology” than what the West used in its military defense fighter aircraft during the 1970s. The West was forced to develop other technology strategies, one of which was actually based on an even older technology: the Faraday cage, a defense used against e-bombs.

That hurdle, and yet another, were eventually conquered – better still – with an ‘old technology school-of-thought’. The West found that the age-old adage of ‘fighting fire with fire’ would apply to protecting communication and electronic devices: simply bombard their “pre-production material structures” with gamma particle radiation. Thus the term “rad-hard”, or “radiation hardened”, was coined.

In order to prevent EMP telecommunication disruption, fiber-optic communication cables using light waves to transmit communications were bombarded (before installation) with ‘gamma radiation particles’, providing a gamma-to-gamma resistance, or vaccination, against EMP telecommunication disruption – a “rad-hard technique” incorporated for years within U.S. defense Command, Control, Communications, and Computer Intelligence (C4I).

Old Technology – i.e. electron vacuum tubes, solid-state, analog, and arithmetic calculations – is still considered sensitive and continues to be classified by the U.S. government. On the other hand, Extra-Terrestrial Technology (ETT) has brought unusual products into the consumer marketplace, which many take for granted: “Smart Structures” such as “Hybrid Composites”, and “Smart Materials” such as “Shape Memory Alloys”, the latter of which is now in the public domain and found in new-technology ‘eyewear frames’ that reshape back to their original form – after being bent out of shape – when water is poured over the frames.

Some believe that “cellphone technology” is a form of ETT. Nevertheless, more ETT products are coming our way; what is ‘not’ coming our way is the background information we should be focusing on, and for this reason the following information is revealed.

Research and development on ETT materials was decompartmentalized into unrecognizable facilities after Area 51 began receiving so much publicity. Highly classified materials and research projects began being conducted off government installations, in universities and private-firm research facilities around the world.

Initially, material pieces and sections of covertly held extra-terrestrial spacecraft began undergoing research studies at the Wright-Patterson Air Force Base General Electric Research and Development Division in Ohio, the Nellis Air Force Base test range sites at S-4 and Area 51 near Groom Lake in Nevada, and on a U.S. Army base in Dugway, Utah.

The U.S. Central Intelligence Agency (C.I.A.), U.S. National Security Agency (N.S.A.), and U.S. National Reconnaissance Office (N.R.O.) didn’t feel they had enough real estate at Nellis Air Force Base (A.F.B.) to sufficiently “test” alien technology or ETT spacecraft insofar as their reverse-engineering programs went, so they went about an ingenious way of slowly but surely expanding their real estate coverage area, which served to keep sensitive information even further hidden from the prying eyes of the outside world.

Additional funds were also used to bunkerize ETT-developed U.S. defense air, sea, and even the new land-tunneling vehicular programs, where most of the extremely sensitive and larger programs went literally underground – hidden from the ever-watchful eyes of not only our own uncontrollable and subverted satellites but friendly foreign ones and enemy ones as well. With everything in place, the government could move forward.

– – – –

Area 51 Law Suits – 1995 thru 2000

Lack of oversight creates opportunities for violations of environmental law to go undetected and unpunished. Some have charged that the Department of Defense, as recently as 1993, used secrecy as a cover for violations of environmental law. Recent lawsuits against the Department of Defense and the Environmental Protection Agency (EPA) allege that:

(1) Illegal open-air burning of toxic wastes took place at a secret Air Force facility near Groom Lake, Nevada; and,

(2) EPA has not exercised its required environmental oversight responsibilities for this facility.

Responding to the second (2nd) of these lawsuits, the EPA reported that in early 1995 it had seven (7) regulators on staff with Special Access [access to “Black Programs”] clearance who inspected the Groom Lake facility regarding “unknown health dangers” suffered by U.S. government contractors who worked at the U.S. Nellis Air Force Base (AFB) experimental testing range near Groom Lake, Nevada, known as “Area 51”.

The U.S. government would not release information to the victims or their professional advisors (medical and legal) as to the exact nature of “what” those workers had been exposed to, and consequently lawsuits were filed against the U.S. government.

What the public wasn’t aware of was what occurred ‘before’ a U.S. Presidential directive in 2000.

What Happened In 1995:

In a federal 9th Circuit Court of Appeals case, these same injured government workers – from Area 51 – were battling for the release of U.S. government information. In one such instance, the U.S. Air Force base “Nellis AFB Nevada” claimed one (1) of their “manuals” – an “unclassified manual” in its entirety – should be considered a “classified” manual by the federal court. [It is suspected that because the particular “unclassified” manual was simply “found to be at Area 51” it automatically became “classified” by reason of its “location”. See, e.g., Declaration of Sheila E. Widnall (below).] Hence, the 9th Circuit Court of Appeals ruled in favor of the government, prohibiting the government-worker plaintiffs from pursuing their case further through any other court.

Interesting information on this was revealed by an official of the United States Air Force during the federal hearings. (See immediately below.)

UNCLASSIFIED DECLARATION AND CLAIM OF MILITARY AND STATE SECRETS PRIVILEGE OF:

SHEILA E. WIDNALL, SECRETARY OF THE AIR FORCE

I, SHEILA E. WIDNALL, HEREBY DECLARE THE FOLLOWING TO BE TRUE AND CORRECT:

1. Official Duties: I am the Secretary of the United States Air Force and the head of the Department of the Air Force. In that capacity, I exercise the statutory functions specified in section 8013 of Title 10, U.S. Code. I am responsible for the formulation of Air Force policies and programs that are fully consistent with the national security directives of the President and the Secretary of Defense, including those that protect national security information relating to the defense and foreign relations of the United States. As the Secretary of the Air Force, I exercise authority over the operating location near Groom Lake, Nevada, and the information associated with that operating location. As the head of an agency with control over the information associated with the operating location near Groom Lake, I am the proper person to assert the military and state secrets privilege with regard to that information. Under Executive Order 12356, I exercise original TOP SECRET classification authority, which permits me to determine the proper classification of national security information on behalf of the United States. Executive Order No. 12356, Sec. 1.2, 47 Fed. Reg. 20,105 (1982), reprinted in 50 U.S. Code Section 401 (1991); Presidential Order of May 7, 1982, Officials Designated to Classify National Security Information, 50 U.S. Code Section 401 (1991).

2. Purpose: This Declaration is made for the purpose of advising the court of the national security interests in and the security classification of information that may be relevant to the above captioned lawsuits. The statements made herein are based on (a) my personal consideration of the matter, (b) my personal knowledge; and (c) my evaluation of information made available to me in my official capacity. I have concluded that release of certain information relevant to these lawsuits would necessitate disclosure of properly classified information about the Air Force operating location near Groom Lake, Nevada. I am satisfied that the information described in the classified Declaration is properly classified. I have further determined that the information described in the classified Declaration, if released to the public, could reasonably be expected to cause exceptionally grave damage to the national security. It is not possible to discuss publicly the majority of information at issue without risking the very harm to the national security that protection of the information is intended to prevent.

3. Security Classification: Under Information Security Oversight Office guidance, “[certain information that would otherwise be unclassified may require classification when combined or associated with other unclassified information.” (32 CFR 2001.3(a)) Protection through classification is required if the combination of unclassified items of information provides an added factor that warrants protection of the information taken as a whole. This theory of classification is commonly known as the mosaic or compilation theory. The mosaic theory of classification applies to some of the information associated with the operating location near Groom Lake. Although the operating location near Groom Lake has no official name, it is sometimes referred to by the name or names of programs that have been conducted there. The names of some programs are classified; all program names are classified when they are associated with the specific location or with other classified programs. Consequently, the release of any such names would disclose classified information.

4. National Security Information: As the head of the agency responsible for information regarding the operating location near Groom Lake, I have determined that information that concerns this operating location and that falls into any of the following categories, is validly classified:

a. Program(s) name(s); b. Mission(s); c. Capabilities; d. Military plans, weapons, or operations; e. Intelligence sources and methods; f. Scientific or technological matters; g. Certain physical characteristics; h. Budget, finance, and contracting relationships; i. Personnel matters; and, j. Security sensitive environmental data.

The following are examples of why certain environmental data is sensitive to the national security. Collection of information regarding the air, water, and soil is a classic foreign intelligence practice, because analysis of these samples can result in the identification of military operations and capabilities. The presence of certain chemicals or chemical compounds, either alone or in conjunction with other chemicals and compounds, can reveal military operational capabilities or the nature and scope of classified operations. Similarly, the absence of certain chemicals or chemical compounds can be used to rule out operations and capabilities. Revealing the composition of the chemical waste stream provides the same kind of exploitable information as does publishing a list of the chemicals used and consumed. Analysis of waste material can provide critical information on the makeup as well as the vulnerabilities of the material analyzed. Disclosure of such information increases the risk to the lives of United States personnel and decreases the probability of successful mission accomplishment.

5. Role of State and Federal-Environmental Agencies: Since 1990, appropriately cleared representatives of Nevada’s Department of Conservation and Natural Resources have been authorized access to the operating location near Groom Lake. The state representative’s role is and has been to monitor and enforce compliance with environmental laws and regulations and to advise on remedial efforts, if required. Appropriately cleared officers of the U.S. Environmental Protection Agency were recently granted access to the operating location near Groom Lake for inspection and enforcement of environmental laws. Federal inspectors from the Environmental Protection Agency commenced an inspection pursuant to the Solid Waste Disposal Act, commonly referred to as a “RCRA inspection,” at the operating location near Groom Lake, Nevada on December 6, 1994.

[EDITOR’S NOTE: Groom Lake: On May 19, 1995, the Director of the FFEO and the Deputy Assistant Secretary of the U.S. Air Force signed a memorandum of agreement ensuring that EPA has continued access to the operating location near Groom Lake for administering environmental laws. Moreover, due to national security concerns, the Air Force agreed to provide reasonable logistical assistance to EPA. Finally, EPA agreed that any classified information obtained by EPA would be treated in accordance with applicable laws and executive orders regarding classified materials.]

The Air Force has taken these steps to ensure full compliance with all applicable environmental laws. At the same time that the operating location near Groom Lake is being inspected for environmental compliance, it is essential to the national security that steps also be taken to prevent the disclosure of classified information.

6. Invoking Military and State Secrets Privilege: It is my judgment, after personal consideration of the matter, that the national security information described in this Declaration and in the classified Declaration, concerning activities at the U.S. Air Force operating location near Groom Lake, Nevada, constitutes military and state secrets. As a result, disclosure of this information in documentary or testimonial evidence must be barred in the interests of national security of the United States. Pursuant to the authority vested in me as Secretary of the Air Force, I hereby invoke a formal claim of military and state secrets privilege with respect to the disclosure of the national security information listed in paragraph four of this Declaration and more fully discussed in the classified Declaration, whether through documentary or testimonial evidence.

7. Environmental Compliance: Although I have found it necessary to invoke the military and state secrets privilege, I believe it important to comment on the Air Force’s commitment to full compliance with the environmental laws of the United States. Our goal is to be the best possible environmental steward of the lands comprising the Nellis Range. To meet that goal we are cooperating and will continue to cooperate with both federal and state environmental agencies.

8. Under penalty of perjury, and pursuant to section 1746 of Title 28, U.S. Code, I certify and declare that the foregoing statements are true and correct.

Executed this 21st day of February 1995 at Arlington, Virginia.

Sheila E. Widnall Secretary of the Air Force

What Happened In 2000:

The outcome – five (5) years later – was that before President Clinton left office, he signed a document sealing the lid on the secret once and for all, which also sealed the U.S. government workers’ fate. Interests of “national security” were cited.

Below is an exact copy of the document signed by the U.S. President.

THE WHITE HOUSE Office of the Press Secretary

For Immediate Release February 1, 2000

TO THE CONGRESS OF THE UNITED STATES:

Consistent with section 6001(a) of the Resource Conservation and Recovery Act (RCRA) (the “Act”), as amended, 42 U.S.C. 6961(a), notification is hereby given that on September 20, 1999, I issued Presidential Determination 99-37 (copy enclosed) and thereby exercised the authority to grant certain exemptions under section 6001(a) of the Act.

Presidential Determination 99-37 exempted the United States Air Force’s operating location near Groom Lake, Nevada from any Federal, State, interstate, or local hazardous or solid waste laws that might require the disclosure of classified information concerning that operating location to unauthorized persons.

Information concerning activities at the operating location near Groom Lake has been properly determined to be classified, and its disclosure would be harmful to national security. Continued protection of this information is, therefore, in the paramount interest of the United States.

The determination was not intended to imply that in the absence of a Presidential exemption, RCRA or any other provision of law permits or requires the disclosure of classified information to unauthorized persons. The determination also was not intended to limit the applicability or enforcement of any requirement of law applicable to the Air Force’s operating location near Groom Lake except those provisions, if any, that would require the disclosure of classified information.

WILLIAM J. CLINTON

THE WHITE HOUSE,

January 31, 2000

– –

 

Submitted for review and commentary by,

 

Concept Activity Research Vault ( CARV ), Host
E-MAIL: ConceptActivityResearchVault@Gmail.Com
WWW: http://ConceptActivityResearchVault.WordPress.Com

 

Secret IT Directorate


[ PHOTO ( above ): Former U.S. Central Intelligence Agency Headquarters ( click on image to greatly enlarge ) ]

Secret IT Directorate
by, Concept Activity Research Vault ( CARV )

November 22, 2011 11:45:08 ( PST ) Updated ( Originally Published: October 25, 2010 )

USA, Menlo Park, California – November 22, 2010 – Some may not recall the ‘first public announcement ( 2000 )’ of the United States Central Intelligence Agency ( CIA ) ‘private business corporation’, referred to as the IN-Q-TEL CORPORATION INTERFACE CENTER ( aka QIC ). It was formerly known as IN-Q-TEL CORPORATION, and ‘that company name’ was earlier still IN-Q-IT CORPORATION ( ‘not’ to be confused with INTUIT CORPORATION, the business behind the software application programs QuickBooks and TurboTax ); however, the CIA ‘reversed’ its previous private business name-change decisions, so that it is now known as the IN-Q-IT CORPORATION ( IN-Q-IT ) today. Clear as mud, right?

Some wonder whether “IN-Q-IT” is even ‘really’ the ‘true name’ of this CIA ‘private business’ company today, or whether – within the intelligence community’s pea-’n-shell game of names – other company subsidiary names may have developed; but for now the IN-Q-IT CORPORATION is ‘currently known’ as the U.S. Central Intelligence Agency ( CIA ) ‘venture capital’ private business corporation.

It is important to understand precisely ‘what this CIA private business was supposed to be accomplishing’ versus ‘what the CIA actually did’ with its private business; and, more recently, ‘what it has become’ and ‘what it is supposed to be accomplishing’ today and for the future.

The curious state of affairs sees no one knowing anything more about the CIA QIC ( IN-Q-TEL Interface Center ) private business corporation than the few who did when it began, but ‘now’ no one is even required to inform the public with an accounting to justify anything surrounding it. Why? Because it was meant to be a ‘private business’ company, ‘not’ a U.S. government entity, and ‘that’ is ‘how’ the CIA created it to remain – outside anyone’s purview – for a rather ‘complex’ reason.

The only method by which an ‘even more complete’ and ‘even more accurate’ assessment may be formulated, for an ‘even more thorough’ understanding, is quite involved and may at times be highly complex. One must not only review the ‘multiple facet areas’ this CIA private business company was originally designed to tackle, but more so what it was supposed to accomplish; and from within ‘both’ of those areas, go on to ‘realize precisely’ what ‘were’ – and ‘still are’ – today’s “problem sets” facing the CIA, and just ‘how’ it is juggling them all.

Some believe ‘members’ of U.S. Congressional oversight committees and subcommittees had to attend ‘special educational lessons’ designed by the CIA. Did key members of Congress attend what basically amounted to a CIA ‘school’?

Many believed the CIA Congressional school to be non-existent, and were left to watch other, less palatable theories develop: that the U.S. Congress had simply tired of too many complex CIA oversight reviews – so much so that Congress relegated its own authority over to the CIA, in what some believed was tantamount to the CIA ‘fox’ guarding its own global-sized intelligence ‘chicken coop’.

Some may now be enlightened to understand what the United States Central Intelligence Agency ( CIA ) decidedly phrased as its own “radical departure” away from what it perceived as ‘inefficient economic budget support’ for solving its own ‘quantum complexities’ within ‘highly specific areas’ – still ‘classified’ in the interest of ‘national security’ – burdensomely producing an exponential growth of new “problem sets.” The CIA would only publicly explain these – in the most general of terms – as arising ‘from within un-named areas’ of ‘science and technology research and development applications’. The CIA decided to establish ‘limits upon’ those areas and to ‘simultaneously’ establish ‘marketable derivatives’ it called “products,” from which CIA Office of Science and Technology ( S&T ) oversight could ‘manage distribution of information knowledge’, but on an ‘in-exchange’ contractual agreement basis with ‘cooperative’ “private sector” ‘individuals’, ‘businesses’, ‘institutions’ and ‘organizations’. The CIA would thereby ‘establish who held proprietary keys’ to the ‘special skill sets’ of already protected ‘intellectual property rights’ over “existing” technology, and establish global CIA control over all “emerging” ( new, upcoming, future ) technology proprietary rights; by sole marketeering, ‘special talents’ and ‘special services’ could be harvested, and incredible amounts of ‘profit’ could also be harnessed ( absorbed ) by the CIA.

To many of entrepreneurial independent spirit, this CIA QIC private business corporation appeared, in essence, out of nowhere – like a new Borg structure infringing on the private freedoms a few once experienced in the global marketplace of the past – and to others the CIA QIC tenor was too Godfather-like: making people and entities an offer they couldn’t refuse. While the CIA foresaw such rumors and speculation coming, in reality, what ‘was’ the ‘CIA’ doing by opening up ‘its own private business corporation’?

Visionary dreams may see the United States Central Intelligence Agency ( CIA ) ‘shed’ its ‘government skin’ to become the ‘world’s largest multi-national corporation’, holding the ‘world’s largest monopoly’ on information technology ( IT ) research and development direction of much of the world’s finest talent resources, i.e. private ‘individuals’, ‘businesses’, ‘institutions’ and ‘organizations’ independently operating outside U.S. Congress ‘oversight, budget justification and related constraints’. With such clever restructuring in place, the CIA would cease to exist as the public knows it today – technically, by legal definition, becoming a wholly-owned ‘non-profit organization’ no longer requiring U.S. Department of the Treasury tax-dollar funding. Set free, a new type of CIA would exist with ‘self-determined financing’ and stock market trading profits derived from a host of private-sector corporate ‘mergers and acquisitions’ ( M&A ).

While the surface dreams of such visionaries might at first appear ingenious, ‘how’ was ‘all’ of this ‘actually assembled’?

Before 1999, it took the CIA Office of Legal Counsel less than one ( 1 ) year of researching various United States laws to locate legacy technicality provisions, approved by the U.S. Congress, allowing the CIA to exercise its own “radical departure” plan.

By 2000, ‘reality’ saw the fetal stages of this CIA private business venture plan developing, after which the public heard nothing further about its progress.

Some believed an ‘initial public offering’ ( IPO ) – paying dividends to private individual investors, with corporate trading of shares of ‘stock’ – could have been misconstrued as potentially the world’s largest ‘insider trading scheme’ headache for the United States Securities and Exchange Commission ( SEC ), whose predictives could only imagine managing the envelopment of multiple new-technology companies trading on ‘stock exchange’ floors – companies that could potentially carry forward ‘mutual profit secrets’ paying more funds than anticipated into the CIA private business plan. This would be a “radical departure” from what otherwise had long been understood as the status quo of world trading, where embarrassing implications might turn ‘terrorist fund reduction measures’ into ‘profits derived from CIA-led secret private business developments in high-technology products’. Could such a “radical departure” plan backfire, or morph otherwise ‘unsophisticated terrorists’ – utilizing improvised munition missions – into a new, more powerful community of ‘uncooperative competitive business terrorists’? Perhaps.

Implications of a CIA private business group of subsidiary businesses trading stock on ‘open stock market exchanges’ around the world could create an entirely ’new form of intelligence blowback’ of staggering global socio-economic business proportions for future generations.

Today, no overall clear picture exists of what still remains cloaked in secrets – whether ‘government’ or ‘private’ – while outside both domains this CIA private business enterprise continues growing. But in which directions?

Prior to 2001, the new CIA plan became a ‘high-directional, multi-tiered, simultaneous growth-oriented economic support expansion’ for and of ‘key-critical secret-sensitive advanced “information technology” ( IT ) derivatives ( “products” )’ that could ‘only be implemented’ by the CIA “identifying” ( targeting ) and “partnering” ( obtaining ) ‘existing information’ and ‘information tasking ability quotients’ from “global” ( worldwide ) “private sector” masses – an “infinite” ( unlimited ) supply of ‘private individuals’, ‘private businesses’ and ‘institutions’ to become ‘dedicated taskers’ of controlled CIA “problem sets.”

The CIA private business’s future would depend on successfully exercising better economic sense to its maximum potential, immediately alongside the highly specific advanced technological enhancements the United States ‘intelligence community’ would grow under a new ‘broad-term secrecy’ – commonly known but hidden within what the CIA termed only as “information technology” ( IT ) – that would necessarily require CIA-controlled ‘targeting’, ‘shaping’ and ‘acquisition’ of a plethora of private business sector information technology ( IT ) application research and development.

Truly a “radical departure,” as the CIA publicly alluded, when describing its ‘new mission focus’ – solving CIA “problem sets.”

CIA-controlled ‘special technology’ research and development ( R&D ) focused on ‘applications’ that later became known as “Commercial Off The Shelf” ( C.O.T.S. / COTS ) ‘products’. These were, in essence – during the early stage of informational development – the crux of what the CIA wanted presented on its ‘table’, whereupon the CIA would legitimize and manage ‘mass information exchanges’ that the CIA would ‘trade’ for ‘other valuable considerations’, but only with a select few ‘private companies’ ( e.g. LOCKHEED, LUCENT, PHILIPS, AT&T, et al. ). These companies would in return be ‘capable of offering’ – through ‘United States government qualified’ contractual agreement exchanges only – whatever the CIA deemed they ‘could place of further interest’, or could ‘further the duration of continuing to provide’ what these select private ‘individuals’, ‘companies’, ‘institutions’ and ‘organizations’ were ‘already providing under U.S. government contract agreement harvests’.

The public, however, only understood ‘press reports’ that kept all of the aforementioned very ‘simple’ – indicating, in the vaguest of terms, that ‘products would eventually be sold’ “through the private sector.”

Secret-sensitive ‘products’, which were in actuality ‘technological breakthroughs’, were to be traded between CIA-selected and controlled business stock holdings, and the CIA IN-Q-TEL INTERFACE CENTER ( QIC ) would privately – and thereby secretly – manipulate all technology funds derived from what the CIA QIC publicly referred to as its ‘partners’ and ‘other vendors’, which would remain ‘outside the purview of U.S. Congress government budget oversight’, where all private companies enjoy unfettered privileges of privacy.

By utilizing U.S. Department of the Treasury government tax funds – for U.S. government contract agreement funding to ‘private business partners’ – the United States Federal Reserve System follows in CIA footpath lock-step by ‘mirroring’ private bank wire transfer monies, directed and then redirected through a long chain of foreign-corporation-named offshore bank accounts and secretly routed back into the U.S. Central Intelligence Agency ( CIA ) IN-Q-IT CORPORATION ( aka QIC ) private business enterprise handling ‘venture capital’, where new project funding amounts may be decidedly broken down into smaller amounts, or pooled into much larger amounts, for dispersal to other clandestine secret-sensitive intelligence programs, projects and/or operations that gain the strength to easily remain ‘outside U.S. Congress intelligence oversight board committee and subcommittee scrutinization’.

Director of Central Intelligence ( DCI ) orchestral management arrangements within the IN-Q-TEL INTERFACE CENTER ( QIC ) draw on historical perspective: a private business exercising ‘en masse privatized mind teams’ understands today’s fallibilities, in keeping with the human frailties encountered under the CIA’s inherent procedural compartmentalization-of-secrets rule ( no “talking around” ), which requires those outside each task to be unable to quickly assemble an ‘overall picture’ of what overall CIA plans consist of – at least that is how it is supposed to work, though it rarely does. That alone, in and of itself, became a “problem set” to solve, one demanding drastic measures. Hence, the CIA “radical departure” plan has another design serving to counteract intelligence information leaks.

Ingenious is a very small word to describe even one ( 1 ) facet of this CIA private business plan, in which the public’s limited understanding is confusingly ’shifted from what it perceives to be government secrets’ and moved rapidly back and forth between ‘private sector secrets’ – what only a few perceive to be a ‘new wave intelligence form’ or ‘combinatoric intelligence structuring’ – producing a shield ( shell ) that protects even more secrets beneath what has become a new, globally flexible, layered CIA support group.

As an extremely large ’black budget’ intelligence missions funding source, the CIA IN-Q-TEL INTERFACE CENTER would only be the recipient of limited and toned-down recommendations supplied by the QIC Board of Advisors ( CIA headquarters ), measures to be reviewed for eventual implementation by the QIC Board of Trustees for the essential discretionary manipulation of sophisticated:

1. Technologies [ i.e. XEROX PARC RESEARCH CENTER, et al. ];

2. Vendors [ i.e. LOCKHEED, et al. ];

3. Debt [ i.e. TELECREDIT INC., et al. ];

4. Capital [ i.e. MARSH & MCLENNAN CAPITAL INC. ];

5. Stock Market Trading [ i.e. GOLDMAN SACHS & CO. ]; plus,

6. More.

The report ( below ) shows ‘who’ was initially placed in ‘experienced authoritative positions’ and ’who’ was chosen as a ’senior level executive advisor’, all selectively chosen by the CIA, which pulled them from ‘key critical private businesses’ to ‘guide’ the private U.S. Central Intelligence Agency business venture.

Such should really come as no surprise, at least to those who understand the mechanics of international business, trading and finance, where all domestic and foreign bank account transactions are mirrored under oversight by the United States Federal Reserve System ( FED ) and the U.S. Securities and Exchange Commission ( SEC ), the latter two ( 2 ) of which are ‘overseen’ by the U.S. Central Intelligence Agency ( CIA ).

This information was derived, outside of ’market sensitive‘ ( stock market trading ) material, from a Critical Sensitive National Security report [ August 13, 2001 ] of the U.S. Congress House of Representatives Subcommittee on Oversight and Investigations ( Subcommittee ) of the Committee on Financial Services, relying on information supplied by, amongst others, the SEC Divisions of Enforcement and Corporation Finance; the Offices of the Chief Accountant, General Counsel, and Compliance, Inspections and Examinations; the Office of the Comptroller; the Office of Economic Analysis; and the U.S. Central Intelligence Agency ( CIA ).

But, is all this ‘really going on’? See full report ( below ).

– – – –

Courtesy: Unwanted Publicity Information Group

Source: X-CIA Files [ now defunct MSN Group website ]

CIA Sends Hi-Tech Tsunami Warnings by, X-CIA Files – Staff Writer [ AnExCIA@bluewin.ch ]

March 12, 1999

USA, California, Menlo Park – The first ‘publicly open contract’ between the U.S. Central Intelligence Agency ( CIA ) and a so-called ‘private firm’ involves a private corporation that CIA members conceived and funded: the IN-Q-TEL CORPORATION, formerly known as the IN-Q-IT CORPORATION.

In-Q-Tel is actually one of many CIA private-sector business partners – not an uncommon arrangement, resembling the partnerships the CIA has had for decades with the MITRE CORPORATION ( USA ), BELL LABORATORIES ( Canada ) and the TRW Power Thrusting Division ( Hawthorne, California, USA ), the remote-control center for CIA maneuvering of Tracking and Data Relay Satellites ( TDRS ) and the Killer HUGHES ( KH-11 ) anti-satellite satellite ( spaceborne destructive laser platform ) series.

The publicly revealed partnership between the CIA and the IN-Q-TEL CORPORATION has sent another tsunami warning to Japan’s NIPPON ELECTRIC CORPORATION ( NEC ) high-technology monopoly, which has been right on the heels of the advanced high-technology developments seen within the RESEARCH TRIANGLE PARK ( North Carolina, USA ).

There is some doubt and controversy, though, between what the CIA says In-Q-Tel is versus what In-Q-Tel says it is.

The CIA claims ( on its website ) that In-Q-Tel is a “‘non-profit’ organization.”

IN-Q-TEL states it has had ’profitability in-mind for quite some time’.

Let’s see what the facts reveal (below), allowing casual observers to make up their own minds about just what IN-Q-TEL ‘really is’:

The In-Q-Tel Hierarchy

CIA IN-Q-TEL INTERFACE CENTER ( QIC ) players are stacked up on a list that reads like something out of a Robert Ludlum novel, filled with international intrigue and high-tech corporatocracy.

CIA IN-Q-TEL INTERFACE CENTER ( QIC ) Board of Trustees is a Who’s-Who of ‘big corporate’ America:

– Gilman Louie, CEO of IN-Q-TEL CORPORATION, was most recently HASBRO INTERACTIVE Chief Creative Officer and General Manager of the GAMES.COM group ( responsible for creating the HASBRO internet game site ), previously serving as Chairman of the Board of MICROPROSE, CEO and Chairman of SPECTRUM HOLOBYTE and CEO of SPHERE INC., and is on the Boards of Directors of numerous software firms;

– Lee Ault, Chairman, former Chairman and CEO of TELECREDIT INC.;

– Norman Augustine, former Chairman and CEO of LOCKHEED MARTIN CORPORATION;

– John Seely Brown, Chief Scientist, XEROX CORPORATION and President, XEROX PARC RESEARCH CENTER;

– Michael Crow, Executive Vice Provost of Columbia University;

– Stephen Friedman, Senior Principal of MARSH & MCLENNAN CAPITAL INC., and former Chairman of GOLDMAN SACHS AND CO.;

– Paul Kaminski, former U.S. Under Secretary of Defense for Acquisition and Technology, President and CEO of TECHNOVATIONS INC., and Senior Partner in GLOBAL TECHNOLOGY PARTNERS;

– Jeong Kim, President of CARRIER NETWORK, part of the LUCENT TECHNOLOGIES GROUP, and former founder of YURIE SYSTEMS;

– John McMahon, former Deputy Director of U.S. Central Intelligence Agency ( CIA ), former President and CEO of LOCKHEED MISSILE & SPACE COMPANY, and consultant to LOCKHEED-MARTIN CORPORATION;

– Alex Mandl, Chairman and CEO of TELIGENT, and former President and CEO of AT&T; and,

– William Perry, former U.S. Department of Defense Secretary and currently the Berberian Professor at Stanford University.

New CIA Use Of In-Q-Tel Interface Center ( QIC )

In-Q-Tel is a new non-profit corporation funded by the CIA to seek Information Technology ( IT ) solutions to the Agency’s most critical needs. This unique venture was formed to give the Agency ( CIA ) timely access to ‘emerging and developing information technology’.

QIC ( IN-Q-TEL INTERFACE CENTER ) is the ‘interface center’ linking the IN-Q-TEL CORPORATION to the Agency ( CIA ).

QIC – CIA Function

QIC develops a problem set for In-Q-Tel, partners with In-Q-Tel in the solution acceptance process and manages the Agency’s relationship with In-Q-Tel.

QIC plans and evaluates the partnership program, protects CIA security and CIA counter-intelligence interests and communicates the QIC / In-Q-Tel venture to the World.

CIA – QIC Function

The CIA, working in partnership with IN-Q-TEL, created the Agency’s ( CIA ) new found organization QIC.

QIC’s goals now are to be the leading source of commercial, high-impact IT solutions for the Agency ( CIA ) and to be heralded as the single most important contributor to the Intelligence Community by the year 2001. QIC will create and use the full range of corporate processes needed to manage QIC (aka) the ” CIA-In-Q-Tel Partnership ” by delivering CIA-accepted IT solutions.

CIA Goals With QIC

Eventually, IN-Q-TEL will take on a life funded by the high-technology consumer public. QIC ( IN-Q-TEL Interface Center ), however, works comprehensively and collaboratively with Agency ( CIA ) IT specialists, customers, IN-Q-TEL experts, Agency ( CIA ) managers, the Chief Information Officer, the Chief Technology Officer, the Chief Financial Officer, Agency directorates, and the Executive Board to develop an annual, coordinated and approved critical ‘problem set’ for IN-Q-TEL.

QIC leads Agency participation in the partnership’s solution transfer planning, including resources, technology demonstration, and prototype testing and evaluation.

At the same time, QIC works with In-Q-Tel to assure that it addresses issues regarding the transfer of IT solutions into the Agency. QIC also works with Agency customers and their managers to create an environment conducive to the implementation and acceptance of partnership solutions and follow-on initiative.

In-Q-Tel Background

On September 29, 1999 the Central Intelligence Agency (CIA) was treated to something different. In many of the nation’s leading newspapers and television news programs a story line had appeared that complimented the Agency for its creativity and openness.

The media was drawn to a small corporation in Washington, D.C. that had just unveiled its existence and the hiring of its first CEO, Gilman Louie, who described the corporation, called the “IN-Q-IT CORPORATION”, as having been formed “…to ensure that the CIA remains at the cutting edge of information technology advances and capabilities.”

With that statement the Agency ( CIA ) launched a new era in ‘how it obtains cutting-edge technologies’.

In early January 2000, the name of the corporation ( IN-Q-IT CORPORATION ) was changed to IN-Q-TEL CORPORATION.

The ‘origins of the concept’ that has become IN-Q-TEL are traceable to Dr. Ruth David, former CIA Deputy Director for Science and Technology.

She and CIA Science And Technology Deputy Director, Joanne Isham, were the first senior Agency ( CIA ) officials to understand that the information revolution required the CIA to forge ‘new partnerships’ with the ‘private sector’ and ‘design a proposal for radical change’.

The timing of the proposal was fortuitous.

CIA Director of Central Intelligence ( DCI ) George Tenet had just launched his own Strategic Direction Initiative ( SDI ), which included technology as one of its areas for review.

The study made a direct link between Agency ( CIA ) ‘future technology investments’ and ‘improving’ its ‘information gathering’ and ‘analysis capabilities’.

By the summer of 1998, the Agency ( CIA ) had assembled a few senior Agency ( CIA ) ‘staff employees with an entrepreneurial bent’ and ‘empowered them’ to take the Dr. Ruth David original concept and flesh it out.

Aided by a ‘consulting group’ and a ‘law firm’, they ( CIA ) devoted the next 4-months to making the rounds in Silicon Valley ( California ) – and elsewhere – putting the concept through the wringer. Much of the ‘time was spent listening’.

Many they met with were often critical of one aspect or another of the concept.

But, whether they were ‘venture capitalists’, Chief Executive Officers ( CEO ), Chief Technical Officers ( CTO ) or men of Congress and staffers, all eagerly immersed themselves in spirited debates that enriched the Agency ( CIA ) team and ‘drove the concept in new directions’.

By the end of 1998, the Agency ( CIA ) team reached a point at which the concept seemed about right.

Though it had changed considerably from what Dr. Ruth David had initially proposed, it remained true to its core principles.

It was time to hand the ‘product’ of the Agency ( CIA ) work over to someone in the ‘private sector’ with the ‘experience’ and passion necessary ‘to start the Corporation’.

To the delight of the DCI and Agency ( CIA ) team, Norman Augustine, a former CEO of LOCKHEED-MARTIN and 4-time recipient of the Department of Defense highest civilian award, the Distinguished Service Medal, accepted the challenge.

By February 1999, the Corporation was established as a legal entity, and in March [ 1999 ] it [ IN-Q-TEL CORPORATION ] received its first [ 1st ] contract from the Agency ( CIA ). In-Q-Tel was in business, charged with ‘accessing information technology ( IT ) expertise and technology wherever it existed’ and brought it to bear on the’ information management’ challenges facing the Agency ( CIA ).

In-Q-Tel Creation

As an information based agency, the CIA must be at the cutting edge of information technology in order to maintain its competitive edge and provide its customers with intelligence that is both timely and relevant.

Many times the Agency and the federal government have been the catalysts for technological innovations. Examples of Agency ( CIA ) inspired breakthroughs include the LOCKHEED aerospace-designed U-2 ( Dragon Lady ) and SR-71 ( Blackbird ) reconnaissance aircraft and the CORONA ‘surveillance’ satellites, while the ‘parent of the Internet’ [ the Advanced Research Projects Agency network (aka) ARPANET ] was led forward by the Defense Advanced Research Projects Agency ( DARPA ).

By the 1990s, however – especially with the advent of the World Wide Web – the ‘commercial market’ was setting the pace in IT innovation.

And, as is the nature of a market-based economy, the ‘flow of capital’ and ‘talent’ has irresistibly ‘moved to the commercial sector’ where the prospect of huge profits from ‘initial public offerings‘ ( IPO ) and ‘equity-based compensation‘ has become ‘the norm’.

In contrast to the remarkable transformations taking place in Silicon Valley ( California ) and elsewhere, the Agency ( CIA ) – like many large Cold War era ‘private sector corporations’ – felt itself being ‘left behind’. It ( CIA ) was not connected to the creative forces that underpin the digital economy.

And, of equal importance, many in Silicon Valley ( California ) knew little about the Agency ( CIA ) IT ( information technology ) needs.

The opportunities and challenges posed by the information revolution to the Agency ( CIA ) core mission areas of ‘clandestine collection’ and ‘all-source analysis’ were growing daily.

Moreover, the [ CIA ] challenges are not merely from foreign countries, but also ‘transnational threats’.

Faced with these realities [ by 1997 ], the leadership of the CIA made a critical and strategic decision in early 1998.

The Agency’s leadership recognized that the CIA did not, and could not, compete for IT ( information technology ) innovation and talent with the same speed and agility as those in the ‘commercial marketplace’, whose businesses are driven by “Internet time” and ‘profit’.

The CIA mission ‘was’ intelligence collection and analysis, not IT innovation.

The leadership also understood that, in order to extend its reach and access a broad network of IT innovators, the Agency had to step outside of itself and appear not just as a buyer of IT but also as a seller.

The CIA had to offer Silicon Valley ( County of Santa Clara, California ) something of value, a business model that the Valley [ Silicon Valley ] understood; a model that ‘provides’ – for those who joined hands ( became partner affiliates ) with IN-Q-TEL – the ‘opportunity to commercialize’ their ‘innovations’. In addition, IN-Q-TEL ‘partner companies’ would also ‘gain another valuable asset’, access to very difficult CIA ‘problem sets’ that could become ‘market drivers’.

Once the Agency ( CIA ) leadership crossed these critical decision points, the path leading to IN-Q-TEL formation was clear.

In-Q-Tel – Close-Up

In-Q-Tel founder, Norm Augustine, established it as an independent non-profit corporation.

Its Board of Trustees, which now has 10 members, functions as any other board, initially guiding and overseeing the Corporation’s startup activities and setting its strategic direction and policies.

The CEO, who was ‘recruited’ by the Board [ Board of Trustees for IN-Q-TEL CORPORATION ], reports to them [ Board of Trustees for IN-Q-TEL CORPORATION ], and ‘manages’ IN-Q-TEL.

The Corporation [ IN-Q-TEL ] has offices in two ( 2 ) locations:

1. Washington, D.C.; and,

2. Menlo Park, California [ Silicon Valley ].

It [ IN-Q-TEL ] employs a ‘small professional staff’ and a ‘smaller group’ of ‘business consultants’ and ‘technology consultants’.

In-Q-Tel’s mission is to foster the development of new and ‘emerging information technologies’ and pursue ‘research and development’ ( R&D ) that produce solutions to some of the most difficult IT [ information technology ] problems facing the CIA.

To accomplish this, the Corporation [ IN-Q-TEL ] will network extensively with those in ‘industry’, the ‘venture capital’ community, academia, and any ‘others’ who are at the ‘forefront of IT [ information technology ] innovation’.

Through the business relationships that it establishes, In-Q-Tel will create environments for collaboration, product demonstration, prototyping, and evaluation.

From these activities will flow the IT solutions that the Agency ( CIA ) seeks and, ‘most importantly’, the ‘commercial opportunities’ for ‘product development’ by its ‘partners’.

To fulfill its mission, In-Q-Tel has designed itself to be:

– Agile, to respond rapidly to Agency needs and commercial imperatives;

– Problem driven, to link its work to Agency program managers;

– Solutions focused, to improve the Agency’s capabilities;

– Team oriented, to bring diverse participation and synergy to projects;

– Technology aware, to identify, leverage, and integrate existing products and solutions;

– Output measured, to produce quantifiable results;

– Innovative, to reach beyond the existing state-of-the-art in IT; and,

– Over time, self-sustaining, to reduce its reliance on CIA funding.

At its core, In-Q-Tel is designed to operate in the market place on an equal footing with its commercial peers and with the speed and agility that the IT world demands.

As an example, it [ IN-Q-TEL ] can ‘effect the full range of business transactions‘ common to the industry – it is ‘venture [ venture capital ] enabled’, can ‘establish joint ventures‘, ‘fund grants [ grant funding ]‘, sponsor open competitions, ‘award sole source contracts‘, etc. And, ‘because of the many degrees of freedom granted to it‘ [ IN-Q-TEL ] by the Agency ( CIA ), IN-Q-TEL ‘does not require Agency ( CIA ) approval for business deals it negotiates‘.

As such, In-Q-Tel represents a different approach to government R&D.

It [ IN-Q-TEL ] ‘moves away from the more traditional government project’ office model in which the program is managed by the government.

Instead, the Agency ( CIA ) has invested much of the decision-making in the Corporation [ IN-Q-TEL ].

Hence, In-Q-Tel will be judged on the outcomes produced, i.e. the solutions generated, and not by the many decisions it makes along the way.

In-Q-Tel – IT Space

As with many aspects of the In-Q-Tel venture, the Agency took a different approach in presenting its IT needs to the Corporation. It bounded the types of work that In-Q-Tel would perform – its IT operating “space” – by two ( 2 ) criteria:

In the first [ 1st ] instance, it made the decision that In-Q-Tel would initially conduct only unclassified IT work for the Agency ( CIA ).

Second [ 2nd ], to attract the interests of the private sector, it recognized that IN-Q-TEL would ‘principally invest in areas’ where there was both an Agency ( CIA ) need and ‘private sector interest’.

Whereas in the past, much of the commercial computing world did not focus on those technologies useful to the CIA, the intersection zone between intelligence and private sector IT needs has grown tremendously in recent years.

Many of the underlying technologies that are driving the information revolution are now directly applicable to the intelligence business. Examples of commercial applications that also support intelligence functions are:

1. Data warehousing and data mining;

2. Knowledge management;

3. Profiling search agents [ Search Engines and User search requests ];

4. Geographic information systems [ Satellite Communication Information Systems ];

5. Imagery analysis and pattern recognition;

6. Statistical data analysis tools;

7. Language translation;

8. Targeted information systems;

9. Mobile computing; and,

10. Secure computing.

Information Security or INFOSEC, a critical enabling technology for all intelligence information systems, is now a mainstream area of research and innovation in the commercial world, due in no small part to the exponential growth in Internet e-commerce.

Thus, there are a number of commercially available security technologies:

1. Strong encryption;

2. Secure communities of interest;

3. Authentication and access control;

4. Auditing and reporting;

5. Data integrity;

6. Digital signatures;

7. Centralized security administration;

8. Remote users or traveling users; and,

9. Unitary log-in.
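To make two of the listed items – ‘authentication’ and ‘data integrity’ – concrete, here is a minimal illustrative sketch using Python’s standard `hmac` and `hashlib` modules. The key, messages and function names are invented for the example; nothing here describes any actual CIA or In-Q-Tel system:

```python
import hashlib
import hmac

# Hypothetical shared secret, known only to sender and receiver.
KEY = b"illustrative-shared-secret"

def sign(message: bytes) -> str:
    """Return an HMAC-SHA256 tag authenticating the message."""
    return hmac.new(KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Recompute the tag; compare_digest resists timing attacks."""
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"report contents")
assert verify(b"report contents", tag)       # untampered message verifies
assert not verify(b"altered contents", tag)  # any modification is detected
```

A keyed hash like this provides authentication and integrity with a shared secret; true digital signatures ( also on the list ) would instead use asymmetric key pairs so that anyone holding the public key can verify.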

It is, no doubt, the case that the commercial investments flowing into information security outpace the spending made by the Intelligence Community.

Thus, In-Q-Tel will be poised to ‘leverage the investments of others to the benefit of the Agency ( CIA )‘.

Having bounded In-Q-Tel’s IT space with these two ( 2 ) criteria – ‘unclassified work’ with ‘commercial potential’ – the Agency defined a set of strategic problem areas for the Corporation.

These four ( 4 ) areas have the added and obvious benefit of spanning the needs of all the Agency’s directorates and, hence, its core business functions of collection and analysis:

1. Information Security: Hardening, and intrusion detection, monitoring and profiling of information use and misuse, and network and data protection.;

2. Use of the Internet: Secure receipt of information, non-observable surfing, authentication, content verification, and hacker resistance.;

3. Distributed Architectures: Methods to interface with custom / legacy systems, mechanisms to allow dissimilar applications to interact, automatic handling of archived data, and connectivity across a wide range of environments.; and,

4. Knowledge Generation: Geospatial data fusion and multimedia data fusion or integration and, computer forensics.

Information Technology ( IT ) In-Q-Tel – CIA Occupancy

The In-Q-Tel venture will no doubt raise questions with some who believe that it or the Agency ( CIA ) has other motives.

It is, therefore, important to highlight ‘what In-Q-Tel is not’ and what it [ IN-Q-TEL ] will not do.

First, it is not a front company for the Agency ( CIA ) to conduct any activities other than those spelled out in its Articles of Incorporation and its Charter Agreement.

As a non-profit – 501(c)(3) – corporation, it will operate in full compliance with Internal Revenue Service ( IRS ) regulations and, as with all similar non-profits, its IRS filings will be a matter of public record.

In-Q-Tel is ‘openly affiliated with the Agency ( CIA )’, as was made obvious to the world during its press rollout on September 29, 1999.

Of equal importance, it will not initiate work in areas that lead to solutions that are put into so-called “black boxes” – that is, innovations that the government subsequently classifies. To do so would undercut In-Q-Tel credibility with its business partners to the detriment of the Agency.

Finally, IN-Q-TEL ‘is a solutions company‘, ‘not a product company‘.

Working through its business partners, it will demonstrate solutions to Agency problems but will not generate products for use by Agency components.

In-Q-Tel ‘inspired products‘ will be ‘developed through separate contractual arrangements‘ involving Agency ( CIA ) ‘components‘ and ‘other vendors‘.

In-Q-Tel – Structure & Staffing

Central to the In-Q-Tel business model are speed, agility, market positioning, and leveraging.

These attributes, taken together, have helped shape the evolving structure of the Corporation. It is one that intends to emphasize the “virtual” nature of the Corporation while minimizing “brick and mortar” costs: it will operate by facilitating data sharing and decision-making via seamless communications, using a private network with broadband connectivity to the Agency and its partners, while limiting direct infrastructure investments in laboratories and related facilities by leveraging the facilities of others.

To facilitate this intent, the In-Q-Tel Board and CEO decided to hire a small staff composed of people with strong technical and business skills.

At present, the Corporation has about ten ( 10 ) staff employees and, it is expected that, by the end of the year 2000, the total will number about thirty ( 30 ).

The CEO is currently designing In-Q-Tel management structure, but the parameters he has set for it indicate that it will be very flat and aligned for rapid decision-making.

How In-Q-Tel Works

One of the great leaps of faith the Agency took in this venture was to recognize, early on, that private sector businessmen were better equipped than it was to design the Corporation and create its work program.

The Agency’s critical role was to develop the initial concept, help form the best Board possible, give IN-Q-TEL a challenging problem set, and then design a ‘contractual vehicle‘ that ‘gave‘ the Corporation [ QIC ] the ‘necessary degrees of freedom to design itself and operate in the market place‘.

All of this was accomplished in less than 1-year, to include the design of In-Q-Tel’s initial work program. In-Q-Tel’s current work program is built on a process of discrete, yet overlapping, elements – IT roadmapping, IT baselining, and R&D projects.

The underlying philosophy now driving the In-Q-Tel program is to gain an understanding of the many players occupying In-Q-Tel’s IT space – by roadmap analysis – and, concurrently, test and validate the performance and utility of existing products and technologies – by baseline testing – against current Agency needs.

If the test results are successful, the Agency ( CIA ) has the ‘option’ of quickly ‘purchasing’ the ‘products’ directly ‘from the vendor’.

However, in those ‘cases where there are no existing products or technologies‘, or where a gap exists between the baseline test results and the Agency ( CIA ) needs, IN-Q-TEL will launch R&D projects.

In this way, the Agency ( CIA ) obtains near-term solutions through the evaluation of those products considered “best-in-class” and can target its R&D projects more precisely – that-is, to where ‘commercial‘ or ‘other government [ contract ] IT investments [ $ ] are small‘ or nonexistent.

With its first [ 1st ] year budget of about $28,000,000 ( $28 million ), In-Q-Tel has focused its initial efforts on the IT roadmap and baseline elements of the program.

The roadmap project seeks, first, to ‘identify those in industry, government, and academia who occupy the same IT space as In-Q-Tel’ and, secondarily, to ‘spot existing technologies of potential interest’.

The results will also help In-Q-Tel leverage the technical advances made by others, assess the overall direction and pace of research, avoid duplicating work done by other government entities, and highlight [ identify and target ] potential business partners. The roadmap will be updated and refined by In-Q-Tel throughout the life of its work program.

Two ( 2 ) Team Incubators & Twenty ( 20 ) Hi-Tech Firms [ Businesses ]

These twenty ( 20 ) high-tech firms are executing the baseline-testing element of the In-Q-Tel work program. They were selected by an independent review panel of national IT experts convened by In-Q-Tel to evaluate multiple proposals.

Each of the two ( 2 ) teams is working on one ( 1 ) or more ‘incubator concepts’ derived by In-Q-Tel from the Agency ( CIA ) ‘problem set‘ enumerated above. The incubator teams will operate for over a year. As the In-Q-Tel work program grows, it is possible that other baseline incubator teams will be established.

The R&D part of the program, which In-Q-Tel manages, will soon become the core of its activities, with a growing percentage of its funds directed towards a portfolio of research projects. In-Q-Tel is formulating its research thrusts based on the information and test results gathered under the roadmap and baseline work, aided by extensive interactions with the private sector and the Agency.

The design of the research projects will be set by In-Q-Tel and will vary to meet the mutual interests of the Agency ( CIA ), In-Q-Tel, and its prospective business partners.

As mentioned earlier, In-Q-Tel will ‘draw from a broad range of R&D competition‘ models to attract the business partners it seeks.

In some cases, it may assemble teams of companies that each has a necessary part, but not the whole, of the solution In-Q-Tel seeks.

In ‘other projects’ IN-Q-TEL might be a co-investor in a fledgling company with another business partner such as a venture capital firm.

Or, it could take a more traditional route, using a request for proposal.

In essence, In-Q-Tel will use whatever model most efficiently and effectively meets the needs of all parties to a transaction, with a constant eye towards leveraging its resources and solving the Agency’s IT needs.

Common to most or all of the R&D agreements that In-Q-Tel intends to use will be the subject of intellectual property ( IP ) – more precisely, the ownership of IP and the allocation of IP-generated revenues.

In the area of IT R&D, a deal is typically not struck until all of the parties’ IP rights are clearly established.

In-Q-Tel’s acceptance within the IT market place depends heavily on its ability to negotiate industry standard IP terms.

Recognizing this, the Agency ( CIA ) agreement with In-Q-Tel allows it and/or its partners to retain title to the innovations created and freely negotiate the allocation of IP derived revenues.

The only major stipulation is that the Agency ( CIA ) retain traditional “government purpose rights” to the ‘innovations‘.

Contract Model – In-Q-Tel

Before the partnership between In-Q-Tel and the Agency became a reality, the Agency ( CIA ) had to develop a new contract vehicle that granted the Corporation [ QIC ] the degrees of freedom it needed to operate in the market place.

Most Agency ( CIA ) contracts, including those in R&D, are based on the Federal Acquisition Regulations ( FAR ); however, the FAR is often viewed by industry as overly burdensome and inflexible. And it has been the U.S. Department of Defense ( DoD ) experience that smaller companies often will not contract with the government because of the extra costs they would incur to be FAR compliant.

Because the Agency ( CIA ) wanted to encourage such companies to work with In-Q-Tel, it took a different approach and designed a non-FAR agreement with the IN-Q-TEL CORPORATION.

It [ the CIA ] also adopted elements from the old Internet godfather – the Advanced Research Projects Agency ( ARPA ) – and its model based on “Other Transactions ( OT )” authority granted to the DoD [ U.S. Department of Defense ] by the U.S. Congress.

OT [ Other Transaction ] agreements ‘permit authorized government agencies’ [ e.g. CIA ] to design R&D agreements outside the FAR.

The hoped for result is to spur greater flexibility and innovation for the government. In addition, it permits well-managed businesses, large and small, to perform R&D for the government, using their existing business practices and procedures.

Using an ARPA model OT agreement as a guide, the Agency ( CIA ) designed a 5-year Charter Agreement that describes the broad framework for its relationship with IN-Q-TEL, sets forth general policies, and establishes the terms and conditions that will apply to future contracts. In addition, a short-term funding contract was negotiated that includes In-Q-Tel’s “Description of Work”.

Together these documents define the metes and bounds of the Agency ( CIA ) relationship with In-Q-Tel and permit IN-Q-TEL to negotiate agreements with its partners, absent [ without ] most government flow down requirements.

In-Q-Tel – Advancements

The In-Q-Tel venture is one that has challenged the Agency to think creatively and quickly to address the fundamental impact that the information revolution is having on its core business.

It responded by setting aside traditional policies and practices in many areas and established a new partnership with industry and academia, based on shared interest and mutual benefit.

But, one cannot ignore that this venture involves risk, both for the Agency and In-Q-Tel. From the Agency’s perspective there are three ( 3 ) major areas that will require constant attention:

1. Managing its relationship with IN-Q-TEL;

2. Solution transfer; and,

3. Security.

Perhaps the most important of the three is the first, managing the relationship without stifling In-Q-Tel’s competitive edge.

IN-Q-TEL is a small independent corporation ‘established to improve the mission performance of a much larger government Agency‘. [ ? National Security Agency ( NSA ) ? ]

The imperatives that led to In-Q-Tel have many parallels in industry. In fact, the IT sector is replete with examples of a large corporation seeking to improve its competitiveness by either purchasing a small start-up company or forming a subsidiary.

The ‘parent corporation‘ [ ? ] sees in ‘its offspring‘ traits that it no longer possesses – speed, agility, and expertise. But, for these traits to be realized, ‘the start-up‘ must operate unencumbered from the ‘parent corporation‘ [ ? ], whose natural tendency is to rein in and control it.

Similarly, the Agency ( CIA ) will have to restrain its natural inclination to micromanage IN-Q-TEL and, instead, allow the Corporation [ QIC ] the freedom to prosper. It must have continuous insight into In-Q-Tel’s activities, but must understand that In-Q-Tel is responsible for its own operations, including the design and management of the work program.

Acceptance by Agency ( CIA ) components of In-Q-Tel inspired solutions will be the most important measure of success in this venture. It is also likely to be the hardest.

While there is every expectation that In-Q-Tel will become commercially successful and seed innovative solutions, if they are not accepted and used by Agency line managers, then the overall venture will be judged a failure.

Although In-Q-Tel has a critical role in the solution transfer process, the burden rests with the Agency, since the challenges are as much managerial and cultural as they are technical.

The Agency ( CIA ) Chief Information Officer ( CIO ), directorate heads, and component directors will all have to work closely with IN-Q-TEL to overcome bureaucratic inertia and identify eager recipients of the innovations that the Corporation develops.

Agency ( CIA ) “product champions” for each IN-Q-TEL project should be identified early and should participate fully in its formulation, testing, and evaluation. Incentives should be considered for those Agency ( CIA ) components that commit to projects with unique risks or that require extensive personnel commitments.

These and other strategies will be employed to ensure that the return on the Agency’s investment in In-Q-Tel translates into measurable improvements in its mission performance.

The open affiliation between the CIA and In-Q-Tel is yet another unique aspect and challenge for this venture. Although the Corporation [ QIC ] will be doing only unclassified work for the Agency ( CIA ), the nature of its IT research and its association with a US intelligence agency will undoubtedly attract the interests of foreign persons, some with questionable motives.

The obvious security ramifications of this scenario were well considered in the decision-making process that led to In-Q-Tel’s formation. It was ultimately decided that the risks are manageable and, in many ways, are similar to those faced by any high-tech company trying to protect its IP and trade secrets.

IN-Q-TEL and the Agency ( CIA ) will be working closely to ensure that the Corporation [ QIC ] operates with a high degree of security awareness and support.

In-Q-Tel has a critical role in meeting these three ( 3 ) challenges. However, its most persistent challenge will be developing and sustaining a reputation as a business that:

1. sponsors leading edge research;

2. produces discoveries; and,

3. profitably commercializes those discoveries.

Once it has established a record of accomplishment in these areas, the high caliber IT talent the Agency hopes to reach through In-Q-Tel will be drawn to the Corporation.

In-Q-Tel Future

Those of us at the Agency who helped to create In-Q-Tel are endlessly optimistic about its prospects for success. The early indicators are all positive. Among them is the caliber of the people who stand behind and lead the Corporation and the initial reaction from industry and the trade press to its formation.

IN-Q-TEL Board of Trustees is at least the equal of any large corporation’s board. They are committed to the Agency ( CIA ) mission, the new R&D model that IN-Q-TEL represents, and have invested much of their time to its formation.

The Agency and the nation are in their debt.

The Board [ IN-Q-TEL Board of Trustees ] also recruited an outstanding CEO who brings with him the ‘experiences’ and ‘contacts’ of his Silicon Valley [ California ] base and an established reputation for starting and growing new IT companies.

The favorable press coverage of In-Q-Tel combined with the industry “buzz” engendered by the Board and CEO have brought a flood of inquiries by those interested in doing business with the Corporation. And, most importantly, its work program is already beginning to achieve results that the Agency ( CIA ) can use and that its ( CIA ) partners can commercialize.

Judging by the record to date, the road ahead appears promising. But, In-Q-Tel’s fate also rests in part on those institutions charged with oversight of the Agency and its budget.

Congress has supported the Agency as it launched this new venture. The U.S. Congress “seeded the venture with start-up funding” when it was still in its conceptual phase, but asked hard questions of the Agency throughout the design and formation of In-Q-Tel.

Members understood that starting an enterprise such as IN-Q-TEL is ‘not risk free‘. As with all R&D efforts in government and industry, there will be some home run successes but also some failures. That is the price the Agency must be prepared to pay if it wants to stay on the leading edge of the IT revolution.

With In-Q-Tel’s help plus the continued support of Congress [ U.S. Congress ] and Office of Management and Budget ( OMB ), as well as from the traditional Agency ( CIA ) ‘contractor community‘ and ‘others‘, an “e-CIA” of the next century [ 21st Century ] will evolve quickly, to the benefit of the President and the national security community.

Notes Of Interest

For the next one ( 1 ) or two ( 2 ) years [ 1999 – 2000 ], IN-Q-TEL will accept work only from the CIA.

All solutions that it provides to the CIA will be made available to the entire Intelligence Community.

Codified in a 5-year Charter Agreement with the CIA and a 1-year funding contract that is renewable annually. As stipulated in the Charter Agreement, “…the Federal Government shall have a nonexclusive, nontransferable, irrevocable, paid-up license to practice or have practiced for or on behalf of the United States the subject invention throughout the world for Government purposes”.

The Agency ( CIA ) component that has day-to-day responsibility for guiding the CIA relationship with IN-Q-TEL, including the ‘design and implementation of the contract’ and the ‘problem set’, is the IN-Q-TEL INTERFACE CENTER ( QIC ) which resides inside the CIA Directorate of Science and Technology.

Circa: 2002 – 2008

IN-Q-TEL INCORPORATED (aka) IN-Q-IT CORPORATION 2500 Sand Hill Road – Suite 113 Menlo Park, California 94025 – 7061 USA TEL: +1 (650) 234-8999 TEL: +1 (650) 234-8983 FAX: +1 (650) 234-8997 WWW: http://www.inqtel.com WWW: http://www.in-q-tel.org WWW: http://www.in-q-tel.com

IN-Q-TEL INCORPORATED (aka) IN-Q-IT CORPORATION P.O. Box 12407 Arlington, Virginia 22219 USA TEL: +1 (703) 248-3000 FAX: +1 (703) 248-3001

– –

IN-Q-TEL focus areas, surround:

– Physical Technologies; – Biological Technologies; – Security; and, – Software Infrastructure.

– –

IN-Q-TEL

Investments –

Strategic Investments, Targeted Returns

In-Q-Tel is ‘building’ a ‘portfolio of companies’ that are ‘developing innovative solutions’ in ‘key technology areas’

Similar to many ‘corporate strategic venture’ firms, In-Q-Tel seeks to ‘optimize potential returns’ for our clients — the CIA and the broader Intelligence Community— by investing in companies of strategic interest.

In-Q-Tel engages ‘start-ups’, ‘emerging’ and ‘established’ companies, universities and research labs.

In-Q-Tel structures attractive win-win relationships through ‘equity investments’, as well as ‘strategic product development funding’, ‘innovative intellectual property arrangements’ and ‘government business development guidance’.

An Enterprising Partner –

In-Q-Tel ‘portfolio companies’ value a ‘strategic relationship’ with a ‘proactive partner’.

Companies, that work through In-Q-Tel due diligence process, know their technologies have the potential to address the needs of one of the most discriminating enterprise customers in the world.

In-Q-Tel takes a hands-on approach, working closely with our ‘portfolio companies’ to help ‘drive their success’ in the ‘marketplace’ and to ‘mature [ ‘grow’ ] their technologies’.

In-Q-Tel ‘investment goals’ are focused on ‘return’ on technology – a ‘blend of factors’ that will ‘deliver strategic impact’ on the Agency [ CIA ] mission:

– Effective ‘deployments’ of innovative technologies to the CIA; – Commercially successful ‘companies that can continue’ to ‘deliver’ and ‘support’ innovative technologies; and, – Financial ‘returns to fund further technology investments’ to ‘benefit the Intelligence Community’.

Investing In Our National Security –

In just a few short years, In-Q-Tel has ‘evaluated’ nearly two thousand [ 2,000 ] ‘proposals’:

75% [ 1,500 ] of which have come from companies that had never previously considered working with the government.

To date, In-Q-Tel ‘established strategic relationships’ with more than ‘twenty’ ( 20 ) of these ‘companies’.

Read more about our ‘portfolio companies’ and ‘technology partners’, or learn how to submit a business plan to In-Q-Tel.

Areas Of Focus –

IN-Q-TEL focuses on next generation technologies for gathering, analyzing, managing and disseminating data. Learn more about our areas of focus:

Knowledge Management: [ http://web.archive.org/web/20020630223724/http://www.inqtel.com/tech/km.html ];

Security and Privacy: [ http://web.archive.org/web/20020630223724/http://www.inqtel.com/tech/sp.html ];

Search and Discovery: [ http://web.archive.org/web/20020630223724/http://www.inqtel.com/tech/sd.html ];

Distributed Data Collection: [ http://web.archive.org/web/20020630223724/http://www.inqtel.com/tech/dd.html ]; and,

Geospatial Information Services: [ http://web.archive.org/web/20020630223724/http://www.inqtel.com/tech/gi.html ].

Submit A Business Plan –

“In-Q-Tel also has garnered a reputation in the tech and VC [ Venture Capital ] worlds for being hard-nosed during due diligence. Unlike some venture firms, In-Q-Tel is staffed with hard-core techies who know how to put a program through the ringer. They’ve also got one of the roughest testing domains: the computer systems of the CIA.” – Washington Business Journal ( November 19, 2001 )

– View our criteria [ http://web.archive.org/web/20020630223724/http://www.inqtel.com/submit/index.html ] for submission, and apply for consideration online.

Media Resources –

– Investment Portfolio: [ http://web.archive.org/web/20020630223724/http://www.inqtel.com/news/attachments/InQTelInvestmentPortfolio.pdf ].

Reference

http://web.archive.org/web/20020630223724/www.inqtel.com/invest/index.html

– – – –

Circa: 2002

IN-Q-TEL

Investments –

Technology Partners ( 2002 ) –

INKTOMI [ http://www.inktomi.com ] ( Leading Edge Search and Retrieval Technology )

INKTOMI, based in Foster City, California ( USA ), has offices elsewhere in North America, Asia and Europe.

INKTOMI division business, involves:

Network Products – comprised of industry leading solutions for network caching, content distribution, media broadcasting, and wireless technologies; and,

Search Solutions – comprised of general Web search and related services, and ‘enterprise’ search.

Inktomi ‘develops’ and ‘markets’ network infrastructure software essential for ‘service providers’ and ‘global enterprises’.

Inktomi ‘customer’ and ‘strategic partner’ base of leading companies, include:

MERRILL LYNCH; INTEL; AT&T; MICROSOFT; SUN MICROSYSTEMS; HEWLETT-PACKARD; COMPAQ; DELL; NOKIA; AMERICA ONLINE ( AOL ); and, YAHOO.

SCIENCE APPLICATIONS INTERNATIONAL CORPORATION ( SAIC ) Lead System Integrator ( LSI ) – SAIC LSI [ http://www.saic.com/contractcenter/ites-2s/clients.html  ]

SCIENCE APPLICATIONS INTERNATIONAL CORPORATION ( SAIC ), founded in 1969 by Dr. J. R. Beyster – who remained with SAIC for 30 years, until November 2003 – has had as part of its management, and on its Board of Directors, many well known former U.S. government personnel, including:

– Melvin Laird, Secretary of Defense in the Richard Milhous Nixon Presidential Administration;

– William Perry, Secretary of Defense in the William Jefferson Clinton Presidential Administration;

– John M. Deutch, U.S. Central Intelligence Agency ( CIA ) Director of Central Intelligence ( DCI ) in the William Jefferson Clinton Presidential Administration;

– U.S. Navy Admiral Bobby Ray Inman, U.S. National Security Agency ( NSA ) and U.S. Central Intelligence Agency ( CIA ) – various employed capacities in ‘both’ Agencies – in the Gerald Ford Presidential Administration, Jimmy Carter Presidential Administration and Ronald Reagan Presidential Administration;

– David Kay, who led the search for Weapons of Mass Destruction ( WMD ) for the United Nations ( UN ) following the 1991 U.S. Persian Gulf War, and again in the George W. Bush Presidential Administration following the 2003 U.S. invasion of Iraq.

In 2009, SCIENCE APPLICATIONS INTERNATIONAL CORPORATION ( SAIC ) moved corporate headquarters to Tysons Corner at 1710 SAIC Drive, McLean, Virginia ( USA ).

SCIENCE APPLICATIONS INTERNATIONAL CORPORATION ( SAIC ) is a scientific, engineering and technology ‘applications company’ with numerous ‘state government clients’, ‘federal government clients’, and ‘private sector clients’.

SCIENCE APPLICATIONS INTERNATIONAL CORPORATION ( SAIC ) works extensively, with:

U.S. Department of Defense ( DOD ); U.S. Department of Homeland Security ( DHS ); U.S. National Security Agency ( NSA ); U.S. intelligence community ( others ); U.S. government civil agencies; and, Selected commercial markets.

SCIENCE APPLICATIONS INTERNATIONAL CORPORATION ( SAIC ) Subsidiaries –

SAIC VENTURE CAPITAL CORPORATION; SCICOM TECHNOLOGIES NOIDA ( INDIA ); BD SYSTEMS ( BDS ); BECHTEL SAIC COMPANY LLC; BECK DISASTER RECOVERY ( BDR ); R.W. BECK; BENHAM; CLOUDSHIELD; DANET; EAGAN MCALLISTER ASSOCIATES INC.; HICKS & ASSOCIATES; MEDPROTECT LLC; REVEAL; SAIC-FREDERICK INC.; NATIONAL CANCER INSTITUTE ( NCI ); SAIC INTERNATIONAL SUBSIDIARIES; SAIC LIMITED ( UK ); CALANAIS ( SCOTLAND ); VAREC; APPLIED MARINE TECHNOLOGY CORPORATION; EAI CORPORATION; and, Others.

In 1991, SCIENCE APPLICATIONS INTERNATIONAL CORPORATION ( SAIC ) received transfer of the U.S. Department of Defense ( DOD ), U.S. Army ( USA ), Defense Intelligence Agency ( DIA ) ‘Remote Viewing Program’, renamed the STARGATE Project.

In January 1999, SCIENCE APPLICATIONS INTERNATIONAL CORPORATION ( SAIC ) consultant Steven Hatfill saw SAIC vice president Joseph Soukup internally commission ( with no outside client ) William C. Patrick – a retired leading figure in the legacy U.S. bioweapons program – to produce a report ( 28 pages, delivered February 1999 ) on the possibilities of a terrorist anthrax attack via United States postal mailings, prior to the 2001 anthrax attacks in the United States.

In March 2001, the U.S. National Security Agency ( NSA ) had SCIENCE APPLICATIONS INTERNATIONAL CORPORATION ( SAIC ) in ‘concept definition’ phase for what later became known as the NSA TRAILBLAZER Project, a “Digital Network Intelligence” system intended to ‘analyze data’ carried across computer ‘networks’.

In 2002, the U.S. National Security Agency ( NSA ) chose SCIENCE APPLICATIONS INTERNATIONAL CORPORATION ( SAIC ) to produce a ‘technology demonstration platform’ for the NSA TRAILBLAZER Project, a contract worth $280,000,000 ( USD ).

TRAILBLAZER Project participants, included:

BOEING; COMPUTER SCIENCES CORPORATION ( CSC ); and, BOOZ ALLEN HAMILTON.

In 2005, TRAILBLAZER – believed by outside observers ( http://www.PhysOrg.Com et al. ) to be a continuation of the earlier THINTHREAD data mining program – saw U.S. National Security Agency ( NSA ) Director Michael Hayden inform a U.S. Senate hearing that the TRAILBLAZER program was several hundred million dollars over budget and years behind schedule awaiting approvals.

From 2001 through 2005, SCIENCE APPLICATIONS INTERNATIONAL CORPORATION ( SAIC ) was primary contractor for the $600,000,000 ( USD ) TRILOGY Program, a three ( 3 ) part program intended to replace obsolete FBI computers with a state-of-the-art ‘secure high-speed computer network system’ that would install 500 computer network servers, 1,600 scanners and thousands of desktop computers in FBI field offices. In December 2003, the program delivered to the U.S. Department of Justice ( DOJ ) Federal Bureau of Investigation ( FBI ) the SAIC “Virtual Case File” ( VCF ), a $170,000,000 ( USD ) software system designed to speed the tracking of terrorists and improve communications amongst agents fighting criminals, intended as the FBI ‘critical case management system’. However, nineteen ( 19 ) different government managers were involved in 36 contract modifications averaging 1.3 FBI changes every day – 399 changes over 15 months – after which the FBI continued arguing changes ( through its own intermediary, AEROSPACE CORPORATION ) until the U.S. Department of Justice ( DOJ ) Inspector General ( IG ) criticized the FBI handling of the SAIC software, whereupon in February 2005 SAIC ‘recommended’ the FBI at least ‘begin using’ the SAIC TRILOGY VCF ‘case management system’.

On September 27, 2006 during a special meeting of SCIENCE APPLICATIONS INTERNATIONAL CORPORATION ( SAIC ) stockholders, employee-owners voted by a margin of 86% to proceed with the initial public offering ( IPO ), upon completion of which SAIC also paid existing stockholders a ‘special dividend’ of $1,600,000,000 to $2,400,000,000 ( USD ).

On October 17, 2006 SCIENCE APPLICATIONS INTERNATIONAL CORPORATION ( SAIC ) conducted an initial public offering ( IPO ) of 86,250,000 shares of common stock priced at $15.00 per share. Underwriters – BEAR STEARNS and MORGAN STANLEY – exercised over-allotment options resulting in 11,025,000 additional shares, and the IPO raised $1,245,000,000 ( USD ).

SCIENCE APPLICATIONS INTERNATIONAL CORPORATION ( SAIC ) had approximately 46,000 total employees; 16,000 employees were in McLean, Virginia ( USA ) and another 5,000 employees were in San Diego, California ( USA ).

SRA INTERNATIONAL INC. ( SRA ) [ http://www.sra.com/about-us/profile.php ]

SRA INTERNATIONAL INC., founded in 1978 and headquartered in Fairfax, Virginia, has additional offices across the United States.

SRA INTERNATIONAL INC. is a leading provider of information technology services and solutions to clients in national security, health care and public health, and civil government markets, requiring:

– Strategic Consulting; – Systems Design, Development, and Integration; – OutSourcing; and, – Operations Management.

SRA INTERNATIONAL INC. also delivers business solutions, for:

– Text mining; – Data mining; – Disaster and Contingency Response Planning; – Information Assurance; – Environmental Strategies and Environmental Technology; – Enterprise Systems Management; and, – Wireless Integration.

SRA INTERNATIONAL INC. ORIONMagic ®

– –

Circa: 2002

IN-Q-TEL

Investments –

Portfolio Of Companies ( 2002 ) – Partial List

ARCSIGHT [ http://www.arcsight.com ] ( Security Management Software for The Enterprise )

ArcSight, founded in May 2000, is located in the heart of Silicon Valley, California ( USA ).

ArcSight is a leading supplier of enterprise software that provides the security “air traffic control system” for large, geographically dispersed organizations. These organizations are augmenting their network infrastructure with a wide variety of security devices such as firewalls, intrusion detection and identity management systems that produce a barrage of uncoordinated alarms and alerts that overwhelm the security staff.

With its ‘centralized view’ of ‘all security activity’ combined with ‘real time analysis’ of ‘events’, by both ‘operating at the perimeter and inside’ the organization, ArcSight provides a ‘single solution’, for:

Event capture; Log aggregation; Real time correlation; Incident investigation; and, Reporting.

ArcSight ‘separates’ the ‘true threats and attacks’ from the ‘millions of false alarms and non-threatening activities’ that occur each day, focusing attention and resources on high-priority problems.
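The correlation idea can be sketched in a few lines. The following is a toy illustration only – not ArcSight's actual engine, and the event names are invented – showing how raw alerts aggregated by ( source, type ) let bursts surface as incidents while one-off alarms are suppressed:

```python
from collections import Counter

# Toy security event correlation ( illustrative; not ArcSight's engine ):
# aggregate raw events by ( source, type ) and escalate only the bursts
# that cross a threshold, suppressing one-off false alarms.
def correlate(events, threshold=3):
    """events: list of ( source_ip, event_type ) tuples."""
    counts = Counter(events)
    return sorted(
        (src, etype, n) for (src, etype), n in counts.items() if n >= threshold
    )

events = [
    ("10.0.0.5", "failed_login"), ("10.0.0.5", "failed_login"),
    ("10.0.0.5", "failed_login"), ("10.0.0.9", "port_scan"),
]
incidents = correlate(events)
print(incidents)  # only the repeated failed logins surface as an incident
```

A real correlation engine would also weigh time windows, asset criticality and cross-device rules; the threshold here stands in for all of that.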

The company has delivered enterprise ‘security management solutions’ to leading ‘financial services’, ‘government’ and ‘manufacturing’ organizations while ‘attracting capital’ from ‘leading investors’, such as:

IN-Q-TEL; KLEINER PERKINS CAUFIELD & BYERS ( KPCB ); and, SUMITOMO CORPORATION.

ATTENSITY CORPORATION [ http://www.attensity.com ] ( Text Extraction for Threat Detection )

Attensity Corp., founded in 2000, is a privately held company with dual headquarters in Mountain View, California ( USA ) and Salt Lake City, Utah ( USA ).

Attensity Corp. provides enterprise, ‘analytic software’ and ‘services’, to:

Government agencies; and,

Fortune 500 companies.

Attensity has developed breakthrough text extraction technology that transforms information captured in free form text into structured, relational data.

Attensity enables government agencies to dramatically expand their analytical capabilities in the area of ‘threat detection’ by, powering:

Link analysis; Trending; Exception reporting; Other advanced analytics; and, Knowledge management applications.

Attensity technology is the culmination of nearly a decade [ 10-years ] of research in computational linguistics.
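As a rough illustration of text extraction in general – Attensity's real system rests on a decade of computational linguistics, not the naive regular expression and invented sentence pattern used here – the idea is that free-form text goes in and structured, relational rows come out:

```python
import re

# Illustrative sketch only ( not Attensity's technology ): pull structured
# ( actor, action, item ) rows out of free-form sentences so they can be
# loaded into a relational table for link analysis or trending.
PATTERN = re.compile(
    r"(?P<actor>[A-Z][a-z]+) (?P<action>bought|sold|shipped) (?P<item>\w+)"
)

def extract(text):
    """Return one dict per matched ( actor, action, item ) triple."""
    return [m.groupdict() for m in PATTERN.finditer(text)]

rows = extract("Alice bought explosives. Bob shipped parts.")
print(rows)
```

Once text is in this row form, the ‘link analysis’ and ‘exception reporting’ applications mentioned above can treat it like any other database table.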

Attensity Corporation customers include:

IN-Q-TEL, a strategic venture group funded by the CIA; WHIRLPOOL; and, JOHN DEERE.

ATTENSITY CORPORATION investor, is:

IN-Q-TEL

BROWSE3D [ http://www.browse3d.com ] ( Advanced Web Navigation )

BROWSE3D, founded in 2000, is located in the Dulles Technology Corridor of northern Virginia.

The company’s first Knowledge Management product, the Browse3D Browser, enables Internet users to browse Web sites using a dynamic, interactive, 3 dimensional ( 3-D ) display environment.

One year later [ 2001 ] the Browse3D Browser was recognized as the Best Internet Software of 2001 at the COMDEX Fall Technology Show ( Las Vegas, Nevada, USA ).

Browse3D launched its ‘consumer product’ in January 2002.

For the past 2-years [ since 2000 ], Browse3D has been working to re-invent the online researcher’s tool set. A researcher’s ability to ‘harvest relevant online data’ is often limited by the tools available to view that data.

Future products and technologies promise additional improvements in the way users ‘find’, ‘organize’, ‘save’ and ‘exchange’ web-based ‘content’.

BROWSE3D early-stage venture funding provided, by:

IN-Q-TEL; and, angel investors.

CANDERA INC. [ http://www.candera.com ] ( Enterprise Storage )

Candera Incorporated, founded in 2000, is a development stage stealth mode company headquartered in Milpitas, California ( USA ).

Candera Inc. is developing a new generation, purpose built, network based storage management platform that gives businesses unprecedented ‘control over’ and ‘visibility into’ their networked storage environments.

With the Candera Confluence solution, businesses can dramatically improve the utilization of their existing heterogeneous storage assets by consolidating them into a centrally managed storage pool. These can then be quickly and dynamically allocated to meet the needs of current and future network based applications, giving large enterprises a strategic advantage.

Candera is building the first [ 1st ] system, of a new generation of systems, that will enable customers to unleash the ultimate value of networked information storage.
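The consolidation-and-allocation idea reads, in miniature, something like the following sketch ( hypothetical class and array names; not Candera's product code ): heterogeneous arrays are pooled centrally, and capacity is granted from whichever array has the most free space.

```python
# Hypothetical sketch of pooled storage allocation ( invented names; not
# Candera Confluence ): arrays from different vendors are consolidated
# into one pool, and capacity is allocated from the array with the most
# free space, mimicking central management of heterogeneous storage.
class StoragePool:
    def __init__(self):
        self.free = {}           # array name -> free capacity ( GB )
        self.allocations = []    # ( application, array, GB ) records

    def add_array(self, name, capacity_gb):
        self.free[name] = capacity_gb

    def allocate(self, app, gb):
        name = max(self.free, key=self.free.get)  # most headroom wins
        if self.free[name] < gb:
            raise RuntimeError("pool exhausted")
        self.free[name] -= gb
        self.allocations.append((app, name, gb))
        return name

pool = StoragePool()
pool.add_array("emc-1", 500)
pool.add_array("netapp-1", 800)
array = pool.allocate("webapp", 300)
print(array)  # allocation is drawn from the array with the most free space
```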

CONVERA [ http://www.convera.com ] ( Mission Critical Enterprise Search and Categorization Software )

Convera RetrievalWare is a high-performance intelligent search system that allows broad flexibility and scalability for implementation across corporate intranets and extranets, enabling users to index and search a wide range of distributed information resources, including text files, HTML, XML, over 200 proprietary document formats, relational database tables, document management systems and groupware repositories. Convera RetrievalWare excels in distributed client environments and server environments with hundreds or thousands of users, documents, images and / or multiple media assets.

Advanced search capabilities include concept and keyword searching, pattern searching and query by example.
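A minimal illustration of keyword search and “query by example” ( these are not Convera RetrievalWare's algorithms, just a standard textbook approach ): rank documents by cosine similarity of term-frequency vectors. Because any document's own text can serve as the query, query-by-example falls out naturally.

```python
import math
from collections import Counter

# Textbook ranking sketch ( not RetrievalWare ): score each document by
# cosine similarity between its term-frequency vector and the query's.
def tokens(text):
    return text.lower().split()

def cosine(a, b):
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, docs):
    """Return docs sorted best-match first; query may itself be a document."""
    q = tokens(query)
    return sorted(docs, key=lambda d: cosine(q, tokens(d)), reverse=True)

docs = [
    "intelligence report on storage",
    "storage area network design",
    "cooking recipes",
]
best = search("network storage", docs)[0]
print(best)
```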

Convera is a leading provider of enterprise mission-critical ‘search’, ‘retrieval’ and ‘categorizing’ solutions.

More than 800 customers – in 33 countries – rely on Convera search solutions to power a broad range of mission critical applications, including enterprise:

Portals; Knowledge management; Intelligence gathering; Profiling; Corporate policy compliance; Regulatory compliance; Customer service; and, More.

DECRU [ http://www.decru.com ] ( Secure Networked Storage )

Decru, founded in April 2001, is headquartered in Redwood City, California ( USA ).

Decru solves the problem of secure data storage with a robust, wire-speed encryption appliance that fits transparently into any SAN or NAS storage environment, protecting data from both internal and external threats.

Markets include essentially any organization with a need to protect proprietary or confidential information ( e.g. government, technology, financial services, health care ).

Investors, include:

IN-Q-TEL; NEA; GREYLOCK; and, BENCHMARK.

GRAVITON [ http://www.graviton.com ] ( Early Warning Detection and Notification System for Homeland Security Over Wireless Mesh Networks )

GRAVITON, founded in 1999, is located in La Jolla, California, USA.

Solomon Trujillo, former head of U.S. WEST ( baby bell telephone company ), leads GRAVITON.

GRAVITON is on leading edge of a fledgling ( small ) industry, known as:

Machine to Machine Communications ( M2M ).

GRAVITON is developing an advanced integrated wireless sensor platform uniquely optimized for large-scale distributed sensor network applications, working with Micro Electro Mechanical Systems ( MEMS ) sensor and spread spectrum wireless technologies licensed exclusively to GRAVITON from Oak Ridge National Laboratory ( Tennessee, USA ) – managed by the U.S. Department of Energy ( DOE ).

GRAVITON products and solutions integrate wireless, sensor and data management technology enabling enterprises to efficiently and transparently monitor, control, send, receive, and update system information from devices anywhere in the world.
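The monitor-and-alert loop at the heart of such M2M systems can be sketched as follows ( assumed sensor names and data shapes; not Graviton's platform ): readings from distributed devices are checked against configured bands, and out-of-band values are escalated.

```python
# Toy M2M monitoring loop ( illustrative; sensor names are invented ):
# compare each device reading against its configured ( low, high ) band
# and alert on anything out of range.
def monitor(readings, bands):
    """readings: {sensor_id: value}; bands: {sensor_id: (low, high)}."""
    alerts = []
    for sensor_id, value in sorted(readings.items()):
        low, high = bands.get(sensor_id, (float("-inf"), float("inf")))
        if not low <= value <= high:
            alerts.append((sensor_id, value))
    return alerts

bands = {"tank-7-pressure": (10.0, 80.0), "pump-2-temp": (0.0, 60.0)}
alerts = monitor({"tank-7-pressure": 95.2, "pump-2-temp": 41.0}, bands)
print(alerts)  # only the out-of-band pressure reading is escalated
```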

GRAVITON is supported and funded by a number of corporate partners and investors, including:

IN-Q-TEL; GLOBAL CROSSING; ROYAL DUTCH SHELL ( oil / petroleum ); MITSUI; SIEMENS; QUALCOMM; OMRON; MOTOROLA; and, SUN MICROSYSTEMS.

GRAVITON ‘primary’ financial investors, include:

MERRILL LYNCH;

GRAVITON ‘venture capital’ firms, include:

KLEINER PERKINS CAUFIELD & BYERS ( KPCB ); and, EARLYBIRD.

INTELLISEEK [ http://www.intelliseek.com ] ( Enterprise Intelligence Solutions )

INTELLISEEK, founded in 1997, has since 1998 been changing the way organizations ‘understand’, ‘gather’ and ‘use’ enterprise ‘intelligence’.

INTELLISEEK ‘knowledge discovery tools’ [ as of: 2002 ] provide the nation’s largest enterprises with up-to-the-minute consumer and industry information and ‘competitive intelligence’.

INTELLISEEK ‘Enterprise Search Server’™ ( ESS ) search platform provides a suite of intelligent applications that automate ‘knowledge discovery’ and ‘knowledge aggregation’ from hundreds of disparate, and often hard-to-locate data sources.

INTELLISEEK ‘Knowledge Management’ and ‘Search and Discovery’ solutions solve the fundamental problem of “information overload” by identifying and searching relevant, targeted and personalized content from the internet, intranets and extranets.

INTELLISEEK clients, include:

FORD MOTOR COMPANY ( FOMOCO ); NOKIA; and, PROCTER & GAMBLE.

Investors include:

IN-Q-TEL; FORD VENTURES; RIVER CITIES CAPITAL; GENERAL ATLANTIC PARTNERS LLC; FLAT IRON PARTNERS; BLUE CHIP VENTURE COMPANY; NOKIA VENTURES; and, Other private investors.

METACARTA [ http://www.metacarta.com ] ( Geospatial Data Fusion )

MetaCarta, established in 1999, was launched on more than $1,000,000 in funding from the U.S. Department Of Defense ( DOD ) Defense Advanced Research Projects Agency ( DARPA ) and private investors.

MetaCarta CEO John Frank holds a doctorate from the Massachusetts Institute Of Technology ( MIT ), where during 1999 – as a Hertz Fellow in physics working on a PhD – he conceived a new way to view ‘collections of text’ geographically, an idea that later saw MetaCarta combine his interests in algorithms, information design, and scientific models of real world phenomena.

Metacarta provides a new knowledge management platform that integrates ‘text data with geography’ providing a ‘cohesive system’ for ‘problem solving’.

METACARTA Geographic Text Search ( GTS ) appliance, the software solution, redefines how people interact with information, enabling analysts to view text reports and geographic information in one ( 1 ) logical view through integration of text and geography delivering new information not obtainable from any other source.
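The text-plus-geography idea can be illustrated with a toy gazetteer ( hypothetical API and approximate coordinates; not MetaCarta's GTS ): place names are tagged in each document, so an analyst can ask for all reports mentioning locations inside a geographic bounding box.

```python
# Toy geographic text search ( illustrative; not MetaCarta's GTS ):
# tag known place names in free text, then filter documents by whether
# any tagged place falls inside a latitude/longitude bounding box.
GAZETTEER = {"baghdad": (33.3, 44.4), "kabul": (34.5, 69.2), "paris": (48.9, 2.4)}

def geotag(text):
    """Return ( place, ( lat, lon ) ) pairs for known places in the text."""
    return [(w, GAZETTEER[w]) for w in text.lower().split() if w in GAZETTEER]

def in_bbox(doc, lat_min, lat_max, lon_min, lon_max):
    return any(
        lat_min <= lat <= lat_max and lon_min <= lon <= lon_max
        for _, (lat, lon) in geotag(doc)
    )

docs = ["convoy sighted near baghdad", "meeting in paris"]
hits = [d for d in docs if in_bbox(d, 25, 40, 35, 70)]
print(hits)  # only the report mentioning a place inside the box
```

A production system would disambiguate place names in context ( which “Paris”? ) rather than relying on a flat lookup table.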

MetaCarta CEO John Frank graduated from Yale University.

MOHOMINE [ http://www.mohomine.com ] ( Transforming Unstructured Multi-Language Data Into Actionable Information )

MOHOMINE, founded in 1999, is privately-held and located in San Diego, California, USA.

MOHOMINE technology has been deployed by United States national security organizations.

MOHOMINE mohoClassifier for National Security Organizations ™ reviews ‘text information’ in ‘cables’, ‘e-mails’, ‘system files’, ‘intranets’, ‘extranets’ and ‘internet’ providing ‘automated document classification’, ‘routing’ – based upon ‘learn-by-example pattern recognition’ technology – and ‘reports’ on user defined properties such as ‘topic’, ‘subject’, ‘tone’ ( ‘urgent’, plus others ), ‘author’, ‘source’ ( geographic locations, ‘country’, etc. ), and more.

MOHOMINE mohoClassifier users can easily set up ‘filters’ to automatically ‘identify’ and ‘prioritize’ ( ‘read first’ requirement ) documents, which are processed out of large volumes of other data and then routed so that prioritized information quickly reaches the proper people.
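A minimal learn-by-example sketch of this kind of classification and routing ( not Mohomine's engine; example labels and training text are invented ): a word-frequency classifier is trained on labeled documents, then used to tag incoming text so that ‘urgent’ traffic is read first.

```python
from collections import Counter, defaultdict

# Minimal learn-by-example classifier ( illustrative; not mohoClassifier ):
# train per-label word frequencies on example documents, then assign new
# text to the label whose vocabulary it overlaps most.
class TextClassifier:
    def __init__(self):
        self.word_counts = defaultdict(Counter)

    def train(self, text, label):
        self.word_counts[label].update(text.lower().split())

    def classify(self, text):
        words = text.lower().split()

        def score(label):
            counts = self.word_counts[label]
            total = sum(counts.values())
            return sum(counts[w] / total for w in words)

        return max(self.word_counts, key=score)

clf = TextClassifier()
clf.train("attack imminent threat urgent warning", "urgent")
clf.train("weekly logistics summary routine report", "routine")
label = clf.classify("urgent threat warning received")
print(label)
```

Real systems of this era typically used naive Bayes or similar statistical models with smoothing; the overlap score above is the simplest possible stand-in.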

MOHOMINE currently [ since 2002 ] has more than one hundred fifty ( 150 ) Global 5000 customers across numerous vertical industries, including:

CITICORP; WELLS FARGO; INTEL; TEXAS INSTRUMENTS; PFIZER; BOEING; ORACLE; PEOPLESOFT; and, NIKE.

MOHOMINE investors, include:

IN-Q-TEL; HAMILTON APEX TECHNOLOGY VENTURES; and, WINDWARD VENTURES.

QYNERGY CORPORATION [ http://www.qynergy.com ] ( Long-Lasting Power Solutions For Multiple Applications And Small-Tech )

QYNERGY CORP., founded in 2001, is located in Albuquerque, New Mexico.

QYNERGY technology originated at Sandia National Laboratories ( New Mexico, USA ) and at the University of New Mexico ( New Mexico, USA ).

QYNERGY CORP. develops leading-edge energy solutions based on its proprietary QynCell ™ technology, a ‘materials science’ breakthrough – over other ‘battery’ and ‘portable energy’ devices – that gives QYNERGY several unique competitive advantages.

QYNERGY QynCell ™ is an ‘electrical energy device’ revolution, providing:

Long-Lived Batteries – QynCell usable life potentially spans ‘several decades’ ( multiples of 10 years ), during which time the QynCell device ‘does not require external charging’;

Miniature and Micro Applications – QynCell™ technology is scalable, thus can be ‘miniaturized’, for:

Micro Electro Mechanical Systems ( MEMS ); MicroPower™ applications; Small microelectronics; and, Power-on-a-chip applications.
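The decades-long service life claimed for radioisotope power cells follows directly from exponential decay of the fuel. The sketch below assumes tritium ( a common betavoltaic fuel, half-life roughly 12.3 years ) and an invented initial output figure; it is not based on published QynCell specifications.

```python
import math

HALF_LIFE_YEARS = 12.3    # tritium, an illustrative betavoltaic fuel
P0_MICROWATTS = 100.0     # assumed initial output for this sketch

def power_at(years):
    """Remaining output after `years`, following N(t) = N0 * 2^(-t / T_half)."""
    return P0_MICROWATTS * 2 ** (-years / HALF_LIFE_YEARS)

for t in (0, 10, 20, 30):
    print(f"year {t:2d}: {power_at(t):6.1f} microwatts")
```

After one half-life the cell still delivers exactly half its initial power with no recharging, which is why such devices suit long-duration sensors and micro-electronics.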

SAFEWEB [ http://www.safewebinc.com ] ( Secure Remote Access )

SAFEWEB, established in April 2000, is based in Emeryville, California, USA.

SAFEWEB built the world’s largest ‘online privacy network’, however in 2001 its ‘free online service’ was ‘concluded’ so the company could focus on developing its ‘enterprise’ product.

SAFEWEB is a leading provider of innovative security and privacy technologies that are effective, economical and simple.

SAFEWEB Secure Extranet Appliance ( SEA ), the first [ 1st ] SAFEWEB enterprise security release, reduces the cost and complexity traditionally involved in securing corporate network resources.

SAFEWEB Secure Extranet Appliance ( SEA ), named Tsunami, is a fundamental ‘redesign of extranet architecture’ integrating disparate technologies into a ‘modular plug-in network appliance’ ( SEA Tsunami ).

SAFEWEB SEA Tsunami is an ‘all-in-one solution’ that simplifies implementation of ‘extranets’ and ‘Virtual Private Networks’ ( VPN ), reducing Total Cost of Ownership ( TCO ) through an innovative architecture letting companies build ‘secure extranets’ in less than 1-hour, enabling ‘remote stationed’ ‘employees’, ‘clients’ and ‘partners’ to access ‘internal applications’ and ‘secure data’ from anywhere using a standard internet website browser.
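The extranet-gateway pattern described here – one hardened entry point that authenticates a browser session and then maps public paths to internal applications – can be sketched in a few lines. The routes, hosts, and users below are invented for illustration and do not describe SafeWeb’s actual appliance.

```python
# Hypothetical extranet gateway: authorized users reach internal
# applications through one entry point, using only a web browser.

INTERNAL_ROUTES = {
    "/mail": "http://intranet.example/mail",   # assumed internal hosts
    "/crm":  "http://intranet.example/crm",
}

AUTHORIZED = {
    "alice": {"/mail", "/crm"},
    "bob":   {"/mail"},
}

def route(user, path):
    """Return the internal target for an authorized request, else None."""
    if path not in INTERNAL_ROUTES:
        return None
    if path not in AUTHORIZED.get(user, set()):
        return None        # authenticated users see only their own apps
    return INTERNAL_ROUTES[path]

print(route("alice", "/crm"))   # internal CRM address
print(route("bob", "/crm"))     # None: bob is not authorized for CRM
```

Centralizing the routing and authorization tables in one appliance is what lets such an extranet be stood up quickly: internal applications need no changes of their own.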

SAFEWEB delivers, through established strategic partnerships, customized versions of its Secure Extranet Appliance ( SEA ) Tsunami technology to U.S. intelligence [ CIA, etc. ] and communications agencies [ NSA, etc. ].

SAFEWEB investors, include:

IN-Q-TEL; CHILTON INVESTMENTS; and, KINGDON CAPITAL.

STRATIFY INCORPORATED [  ] ( Unstructured Data Management Software )

In 1999, PURPLE YOGI was founded by former INTEL Microcomputer Research Laboratory scientists Ramana Venkata and Ramesh Subramonian.

PURPLE YOGI, became known as STRATIFY INCORPORATED ( a privately-held company ).

In early 2001, ORACLE CORPORATION veteran and senior executive Nimish Mehta became president and chief executive officer ( CEO ).

STRATIFY INC., headquartered in Mountain View, California ( USA ), is [ 2002 ] the ‘emerging’ leader in ‘unstructured data management’ software.

STRATIFY Discovery System is a ‘complete enterprise software platform’ helping today’s [ 2002 ] organizations ‘harness vast information overload’ by ‘automating the process’ of ‘organizing’, ‘classifying’ and ‘presenting’ business-critical unstructured information usually found in ‘documents’, ‘presentations’ and internet website pages.

STRATIFY Discovery System platform ‘transforms unstructured internal and external data’ into ‘immediately accessible relevant information’, automatically organizing millions of documents into an easily navigable hierarchy.
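Automatic organization of documents into a topic hierarchy is typically done by scoring each document against a taxonomy. The sketch below uses simple keyword overlap – a stand-in for the statistical methods a real system would use – with an invented taxonomy and sample documents.

```python
# Hypothetical taxonomy: topic -> characteristic keywords.
TAXONOMY = {
    "Finance":   {"earnings", "budget", "revenue"},
    "Security":  {"threat", "breach", "encryption"},
    "Personnel": {"hiring", "training", "benefits"},
}

def classify(doc):
    """Assign a document to the topic sharing the most keywords with it."""
    words = set(doc.lower().split())
    scores = {topic: len(words & kws) for topic, kws in TAXONOMY.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "Unfiled"

docs = [
    "Quarterly earnings and revenue outlook",
    "Encryption policy after the breach",
    "New hire training schedule",
]

hierarchy = {}
for d in docs:
    hierarchy.setdefault(classify(d), []).append(d)
print(hierarchy)
```

Run over millions of documents, the same assignment step yields the browsable topic tree the platform describes, with ‘Unfiled’ catching documents the taxonomy cannot place.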

STRATIFY INC. clients, include:

INLUMEN and INFOSYS TECHNOLOGIES LIMITED, named in 2001 as one ( 1 ) of The Red Herring 100.

STRATIFY INC. received funding, from:

IN-Q-TEL; H & Q AT INDIA ( also known as ) H & Q ASIA PACIFIC; SOFTBANK VENTURE CAPITAL ( now known as ) MOBIUS VENTURE CAPITAL; SKYBLAZE VENTURES LLC; and, INTEL CAPITAL.

SRD [ http://www.srdnet.com ] ( Near Real Time Data Warehousing and Link Analysis )

SYSTEMS RESEARCH & DEVELOPMENT ( SRD ), founded in 1983, develops software applications to combat fraud, theft, and collusion.

SYSTEMS RESEARCH & DEVELOPMENT Non-Obvious Relationship Awareness ™ ( NORA ™ ) was originally developed for the casino gaming industry.

SYSTEMS RESEARCH & DEVELOPMENT NORA software is designed to identify correlations across vast amounts of structured data, from hundreds or thousands of data sources, in near real-time, and alert users to potentially harmful relationships between and among people.

SRD NORA software technology leverages SYSTEMS RESEARCH & DEVELOPMENT proven expertise in ‘aggregating’, ‘warehousing’ and ‘leveraging people data’ and ‘transaction data’ to strengthen corporate management and security systems.
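The core of this kind of link analysis is joining records from separate sources on shared identifying attributes ( a phone number, an address ) and flagging the non-obvious connections that result. The sketch below shows the idea on fabricated records; real systems add fuzzy name matching and many more attribute types.

```python
from collections import defaultdict

# Fabricated records from three hypothetical sources.
records = [
    {"name": "J. Smith",  "source": "employees", "phone": "555-0101"},
    {"name": "A. Jones",  "source": "vendors",   "phone": "555-0199"},
    {"name": "Jon Smyth", "source": "watchlist", "phone": "555-0101"},
]

def find_links(records, key):
    """Group records sharing the same value for `key`; keep groups > 1."""
    by_value = defaultdict(list)
    for r in records:
        by_value[r[key]].append(r)
    return {value: group for value, group in by_value.items() if len(group) > 1}

for phone, linked in find_links(records, "phone").items():
    names = ", ".join(r["name"] for r in linked)
    print(f"shared phone {phone}: {names}")   # employee and watchlist entry
```

Here a routine employee record and a watchlist entry share a phone number despite different name spellings – exactly the ‘potentially harmful relationship’ an analyst would want surfaced in near real-time.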

SYSTEMS RESEARCH & DEVELOPMENT clients [ 2002 ], include:

U.S. Department of Defense ( DOD ); CENDANT; TARGET; MGM MIRAGE; MANDALAY BAY RESORT GROUP; and, the Food Marketing Institute.

TACIT [ http://www.tacit.com ] ( Enterprise Expertise Automation )

TACIT, founded in 1997, is located in Palo Alto, California ( USA ) with regional sales offices in Virginia, Maryland, Pennsylvania and Illinois.

David Gilmour serves as president and chief executive officer ( CEO ).

TACIT Knowledge Systems is the pioneer and leader in ‘Enterprise Expertise Automation’.

TACIT products ‘automatically and continuously inventory’ the ‘skills’ and ‘work focus’ of an ‘entire organization’ for ‘dynamic location’ of ‘connections to expertise’ – needed when making decisions, solving problems, and serving customers.
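Expertise location of this kind generally works by mining each person’s written output into a term profile, then answering “who knows about X?” queries against those profiles. The sketch below is a minimal version of that idea with invented people and messages; it is not Tacit’s algorithm.

```python
from collections import Counter, defaultdict

profiles = defaultdict(Counter)   # person -> term frequencies

def observe(person, message):
    """Fold one outgoing message into the author's expertise profile."""
    profiles[person].update(message.lower().split())

def who_knows(topic, top=2):
    """People ranked by how often they write about `topic`."""
    ranked = sorted(profiles, key=lambda p: profiles[p][topic], reverse=True)
    return [p for p in ranked if profiles[p][topic] > 0][:top]

observe("dana", "kerberos ticket renewal and kerberos delegation notes")
observe("lee",  "budget forecast for the quarter")
observe("lee",  "kerberos question from the help desk")

print(who_knows("kerberos"))   # -> ['dana', 'lee']
```

Because profiles are rebuilt continuously from normal work product, the inventory stays current without anyone filling out a skills form – the property the ‘automatic and continuous’ wording above emphasizes.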

TACIT products also include its award-winning flagship product, KnowledgeMail™. In June 2000, TACIT was voted one of the “Hot 100 Private Companies,” by Upside Magazine.

In 2000 and 2001, TACIT was one ( 1 ) of the “100 Companies that Matter,” by KM World [ Knowledge Management World ].

TACIT attracted a ‘world class advisory board’ along with interest from ‘venture capital’ firms and Fortune 500 ‘enterprise’ clients and ‘customers’, including:

IN-Q-TEL; JP MORGAN; CHEVRON-TEXACO ( petroleum and chemical ); UNISYS; HEWLETT-PACKARD; NORTHROP-GRUMMAN ( aerospace & defense ); and, ELI LILLY ( pharmaceuticals ).

TACIT investors, include:

IN-Q-TEL; DRAPER FISHER JURVETSON; REUTERS GREENHOUSE FUND; and, ALTA PARTNERS.

TRACTION SOFTWARE [ http://www.tractionsoftware.com ] ( Harvest and Use Information from All Sources )

TRACTION SOFTWARE, founded in 1996, is located in Providence, Rhode Island ( USA ).

TRACTION® Software is the leader in ‘Enterprise Weblog’ software, bringing together working ‘communications’, ‘knowledge management’, ‘content management’, ‘collaboration’, and the ‘writable intranet portal’.

TRACTION TeamPage™ product addresses the need for ‘unified on-demand view’ of ‘team content’ and ‘team communication’ from ‘all document sources’ in ‘context’ and over ‘time’.

TRACTION TeamPage deploys quickly and easily on an existing network and delivers a ‘capstone communication system’ by turning ‘e-mail’ and ‘web browser’ into powerful tools for end-users.

TeamPage targets ‘program teams’ and ‘product management teams’ in ‘government’ and ‘business’.
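The ‘unified on-demand view’ described above amounts to merging items from many sources ( e-mail, documents, web pages ) into one chronological team view. A minimal sketch of that merge, with invented entries, looks like this:

```python
from datetime import date

# Invented entries from three hypothetical sources.
entries = [
    {"when": date(2002, 3, 5), "source": "e-mail",   "title": "Schedule slip"},
    {"when": date(2002, 3, 1), "source": "document", "title": "Design spec v2"},
    {"when": date(2002, 3, 7), "source": "web",      "title": "Competitor launch"},
]

def timeline(entries):
    """Merge entries from all sources into one chronological view."""
    return sorted(entries, key=lambda e: e["when"])

for e in timeline(entries):
    print(f'{e["when"]} [{e["source"]}] {e["title"]}')
```

Keeping the source label on each entry preserves context – a reader sees at a glance whether a given item arrived by e-mail or was posted to the intranet.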

TRACTION also supports a wide range of applications and business processes, including, but not limited to:

Business Intelligence and Market Research;

Collection Highlighting and Media Distribution;

Investor Relations E-Mail and Public Relations E-Mail Triage and Response; and,

Tracking Exception Process and Reporting Exception Process.

TRACTION SOFTWARE investors, include:

IN-Q-TEL; SLATER CENTER FOR INTERACTIVE TECHNOLOGY; and, private investors.

ZAPLET INCORPORATED [ http://www.zaplet.com ] ( Enterprise Collaboration Tools For Email )

ZAPLET INC., founded in 1999, is located in Redwood Shores, California ( USA ).

ZAPLET INC. is an enterprise software and services company and creator of the Zaplet Appmail System™ collaboration software that brings application functionality directly to a user’s inbox to complete business processes.

ZAPLET INC. Appmail, using a server-based platform, combines the power, ‘centralized control’ and ‘robust security’ of traditional enterprise application systems with the convenience and ease-of-use of e-mail.

ZAPLET Appmail in-box becomes the gateway to a protected server where the application functionality and data securely reside.

Zaplet™ Appmail can be used, to:

Manage and streamline mission-critical business processes; Require no additional client-side upgrades; and, Expand instantly to work teams ‘beyond’ the ‘enterprise’.
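The security model sketched above – the in-box as a gateway while data stays on a protected server – is commonly implemented by embedding only a signed action token in the message, so the mail itself carries no application data. The sketch below shows that pattern with Python’s standard `hmac` module; the key and actions are invented and this is not Zaplet’s actual protocol.

```python
import hashlib
import hmac

SECRET = b"server-side-key"     # hypothetical key; never leaves the server

def make_token(user, action):
    """Sign a (user, action) pair; this token is what the e-mail carries."""
    msg = f"{user}:{action}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def handle(user, action, token):
    """Server performs the action only if the presented token verifies."""
    expected = make_token(user, action)
    return hmac.compare_digest(expected, token)

token = make_token("alice", "approve-po-1234")    # embedded in the message
print(handle("alice", "approve-po-1234", token))  # True: action accepted
print(handle("alice", "approve-po-9999", token))  # False: token mismatch
```

Because the token is bound to one user and one action, a forwarded or altered message cannot trigger anything else – the application functionality and data remain behind the server boundary.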

ZAPLET INC. has received numerous awards, including:

Red Herring 100; Enterprise Outlook – Investors’ Choice; and, Internet Outlook – Investors’ Choice.

ZAPLET INC. customers, include leading companies, in:

Finance; Telecommunication; High technologies; and, Government.

ZAPLET INC. is backed by world class investors, including:

KLEINER PERKINS CAUFIELD & BYERS ( KPCB ); ACCENTURE TECHNOLOGY VENTURES; QUESTMARK PARTNERS L.P.; RESEARCH IN MOTION LIMITED ( RIM ); INTEGRAL CAPITAL PARTNERS; ORACLE CORPORATION; CISCO SYSTEMS INC.; and, NOVELL INC.

– –

Circa: 2010

IN-Q-TEL

Investments –

Portfolio of Companies ( 2010 ) – Partial List

3VR Security; AdaptivEnergy; Adapx; Arcxis; Asankya; Basis Technology; Bay Microsystems; CallMiner; Cambrios; Carnegie Speech; CleverSafe ( SAIC ); CopperEye; Destineer; Elemental Technologies; Ember Corporation; Endeca; Etherstack; FEBIT; FireEye; FluiDigm

Reference(s)

http://web.archive.org/web/20020630223724/http://www.inqtel.com/news/attachments/InQTelInvestmentPortfolio.pdf
http://www.iqt.org/technology-portfolio/orionmagic.html
http://defense-ventures.com/in-q-tel/

– – – –

Research References

“Information Technology Trends And Their Impact On CIA,” January 1999, declassified report of the U.S. Central Intelligence Agency by, CIA Chief Information Officer.

– – – –

 

Submitted for review and commentary by,

 

Concept Activity Research Vault ( CARV ), Host
E-MAIL: ConceptActivityResearchVault@Gmail.Com
WWW: http://ConceptActivityResearchVault.WordPress.Com
