We don’t call it MK Ultra, we call it Neuroscience; we call it what DARPA calls it, what Harvard calls it, what MIT calls it. MK Ultra is not some historical event that happened once in the 60s and then went away. It has evolved into the emerging technologies industry and is fully deployed on the population under the auspices of “Neurosciences”. This is an explanation of the UPDATED MK ULTRA systems that are operating today. Hope & Tivon join Maria Zeee to discuss emerging technologies in transhumanism, MK Ultra for the masses through nanotech and neuroscience, WBAN, Digital Twins, mind reading technology weaponised against humanity and more!

Watch here on Rumble:  Hope & Tivon – MK Ultra For the Masses: Nanotech & Neuroscience

Download the slides here:
Neuroscience MK Ultra for the Masses Zeee Media


Emerging Technologies is Transhumanism
Bioluminescent Interface: Why Are People’s Faces Glowing?
Neuroscience is MK Ultra for the Masses
Brain Initiative and Graphene Flagship
Brain Interface Technology
Mind Reading Technology Via Neural Interfaces
Internet of Behaviors
Digital Twins and Memristors
Digital Twins in HealthCare
Wireless Drugging and the Digital Nervous System
C40 KillBox Cities and Remote Monitoring
5th Generation Warfare
Cognitive Warfare NATO and Brain Interfaces
Neural Lobotomy: DARPA N3
NVIDIA and AMD to run the Tech
Merging Man With Machine Through our DNA
WBAN The Evil Web to Ensnare Humanity
Protect Yourself from Emerging Technology

View Hope & Tivon’s EMF protection products and more via this link: https://ftwproject.com/ref/468

If you would like to support Zeee Media to continue getting the truth out to more people, you can donate via this link: https://donate.stripe.com/6oEdUL2eF1IAdXibII

Website: https://www.zeeemedia.com

Notes from the Show:

From the neuron doctrine to neural networks


For over a century, the neuron doctrine — which states that the neuron is the structural and functional unit of the nervous system — has provided a conceptual foundation for neuroscience. This viewpoint reflects its origins in a time when the use of single-neuron anatomical and physiological techniques was prominent. However, newer multineuronal recording methods have revealed that ensembles of neurons, rather than individual cells, can form physiological units and generate emergent functional properties and states. As a new paradigm for neuroscience, neural network models have the potential to incorporate knowledge acquired with single-neuron approaches to help us understand how emergent functional states generate behaviour, cognition and mental disease.

Brain Initiative Brain to Brain Computer Interface

The Brain Initiative:

The Brain Research Through Advancing Innovative Neurotechnologies® Initiative, or The BRAIN Initiative®, is a partnership between Federal and non-Federal partners with a common goal of accelerating the development of innovative neurotechnologies. Through the application and dissemination of these scientific advancements, researchers will be able to produce a revolutionary new dynamic picture of the brain that, for the first time, shows how individual cells and complex neural circuits interact in both time and space.

The endeavor brings together neuroscientists with nanotechnology specialists and materials engineers to solve issues such as applying electrical stimulus to very small groups of neurons, which may make it possible to treat brain conditions with vastly improved precision.


Noted is the Graphene Flagship Spinoff Project called Inbrain

Graphene Flagship spin-off INBRAIN Neuroelectronics develops intelligent graphene-based neural implants for personalised therapies in brain disorders.

INBRAIN Neuroelectronics is a spin-off company of Graphene Flagship partners the Catalan Institute of Nanoscience and Nanotechnology (ICN2) and ICREA, Spain. It was established in 2019, at the intersection between MedTech, DeepTech and Digital Health, with a mission to decode brain signals to devise medical solutions for patients with epilepsy, Parkinson’s disease and other neurological disorders. The company designs small implantable brain intelligent systems, with the ability to interpret brain signals with unprecedented high fidelity, producing a therapeutic response adapted to the clinical condition of each specific patient.


Neuroscientists Demonstrate Direct Brain-to-Brain Communication in Humans 2014

In a groundbreaking study, scientists led by Dr Giulio Ruffini of Starlab Barcelona, Spain, have successfully transmitted the words ‘hola’ and ‘ciao’ in a brain-to-brain transmission between two human subjects using Internet-linked electroencephalogram (EEG) and robot-assisted, image-guided transcranial magnetic stimulation (TMS) technologies.

Using EEG, the scientists translated the words ‘hola’ and ‘ciao’ into binary code and then emailed the results from India to France.

There a computer-brain interface transmitted the message to the receiver’s brain through noninvasive brain stimulation.

The subjects experienced this as phosphenes, flashes of light in their peripheral vision.

The light appeared in numerical sequences that enabled the receiver to decode the information in the message, and while the subjects did not report feeling anything, they did correctly receive the words.
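As a rough illustration of the encoding step described above, here is a sketch of turning short words into bit strings and back. The exact cipher used in the 2014 study is not reproduced here; a plain 5-bit alphabetic code (a = 0 … z = 25) is assumed purely for illustration.

```python
# Hedged sketch: words encoded as bit strings, as in the 2014 brain-to-brain
# experiment where 'hola' and 'ciao' were sent as binary and delivered to the
# receiver as sequences of phosphene flashes. The 5-bit code below is an
# illustrative assumption, not the paper's actual cipher.

def encode_word(word: str) -> str:
    """Map each letter to a 5-bit code; 5 bits are enough for 26 letters."""
    return "".join(format(ord(c) - ord("a"), "05b") for c in word.lower())

def decode_bits(bits: str) -> str:
    """Inverse mapping: read the stream back 5 bits at a time."""
    chars = [bits[i:i + 5] for i in range(0, len(bits), 5)]
    return "".join(chr(int(b, 2) + ord("a")) for b in chars)

msg = encode_word("hola")    # the bit string the 'Sender' side would emit
print(msg)                   # 20 bits for a 4-letter word
print(decode_bits(msg))      # recovered on the 'Receiver' side
```

In the experiment itself, each bit was delivered noninvasively: one TMS pulse position produced a phosphene (a 1), another did not (a 0), and the receiver tallied the flashes to rebuild the word.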

BrainNet: A Multi-Person Brain-to-Brain Interface for Direct Collaboration Between Brains 2019

“We present BrainNet which, to our knowledge, is the first multi-person non-invasive direct brain-to-brain interface for collaborative problem solving. The interface combines electroencephalography (EEG) to record brain signals and transcranial magnetic stimulation (TMS) to deliver information noninvasively to the brain. The interface allows three human subjects to collaborate and solve a task using direct brain-to-brain communication. Two of the three subjects are designated as “Senders” whose brain signals are decoded using real-time EEG data analysis. The decoding process extracts each Sender’s decision about whether to rotate a block in a Tetris-like game before it is dropped to fill a line. The Senders’ decisions are transmitted via the Internet to the brain of a third subject, the “Receiver,” who cannot see the game screen. The Senders’ decisions are delivered to the Receiver’s brain via magnetic stimulation of the occipital cortex.

…..Our results point the way to future brain-to-brain interfaces that enable cooperative problem solving by humans using a “social network” of connected brains.”

Brain Net

Synchron | The Brain Unlocked

The Brain-Computer Interface Giving an ALS Patient a Voice and Control

Brain communication is not the main issue: it is your whole body that they are communicating with, your spinal cord and your nervous system, and the AI is the problem. People don’t understand that they have replaced your sixth sense with AI.

Battelle to Develop Injectable, Bi-Directional Brain Computer Interface

You don’t need a chip, you don’t need implants, you don’t need headsets. They can use magnetic fields and memristors (electrical components relating electric charge and magnetic flux). You don’t need anything but your wireless now, using AI “stable diffusion” (text-to-image).

Memristor-Based Intelligent Human-Like Neural Computing



Humanoid robots, intelligent machines resembling the human body in shape and functions, can not only replace humans to complete services and dangerous tasks but also deepen our own understanding of the human body in the mimicking process. Nowadays, attaching a large number of sensors to obtain more sensory information and efficient computation is the development trend for humanoid robots. Nevertheless, due to the constraints of von Neumann-based structures, humanoid robots are facing multiple challenges, including tremendous energy consumption, latency bottlenecks, and the lack of bionic properties. Memristors, featured with high similarity to the biological elements, play an important role in mimicking the biological nervous system. The memristor-based nervous system allows humanoid robots to obtain high energy efficiency and bionic sensing properties, similar to those of the biological nervous system. Herein, this article first reviews the biological nervous system and the memristor-based nervous system thoroughly, including both their structures and their functions.
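For readers unfamiliar with the component, the behaviour of a memristor can be sketched with the widely cited HP linear ion-drift model (Strukov et al., 2008): the device’s resistance depends on how much charge has passed through it. The parameter values below are illustrative placeholders, not figures from the review above.

```python
import math

# Hedged sketch: HP linear ion-drift memristor model. Parameters are
# illustrative, not from any specific device or from the cited article.
R_ON, R_OFF = 100.0, 16_000.0   # ohms: fully doped / undoped resistance
D = 10e-9                        # m: device thickness
MU_V = 1e-14                     # m^2 s^-1 V^-1: dopant mobility

def simulate(v_amp=1.0, freq=1.0, steps=20_000, dt=5e-5):
    """Drive the device with a sine voltage; return the memristance trace."""
    w = 0.5 * D                  # doped-region width, start mid-device
    trace = []
    for k in range(steps):
        t = k * dt
        m = R_ON * (w / D) + R_OFF * (1 - w / D)  # state-dependent resistance
        i = v_amp * math.sin(2 * math.pi * freq * t) / m
        w += MU_V * (R_ON / D) * i * dt           # charge moves the boundary
        w = min(max(w, 0.0), D)                   # hard window at the edges
        trace.append(m)
    return trace

trace = simulate()
print(min(trace), max(trace))    # memristance stays between R_ON and R_OFF
```

The key property, and the reason memristors are used to mimic synapses, is that the resistance "remembers" past current: it changes with the history of the signal rather than resetting each cycle.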

Meet your digital twin.   Notice the Aleister Crowley Hexagram.

In Aleister Crowley‘s Thelema, the hexagram is usually depicted with a five-petalled flower in the centre which symbolises the pentagram. The hexagram represents the heavenly macrocosmic or planetary forces and is a symbol equivalent to the Rosicrucian Rose Cross or ancient Egyptian ankh. The five petals of the flower represent the microcosmic forces of 5 elements of the magical formula YHShVH and is a symbol equivalent to the pentagram or pentacle. The two symbols together represent the interweaving of the planetary and elemental forces.[4]

This is also the hexagram we were seeing in the early years to represent body area networks.

The AI is connecting all of the artificial neural networks to the biological neural networks. It is seen in the apps on their phones and utilized by the electronic disease surveillance systems to discern where you are, how much energy has been harvested out of you, and whether you are physically where they think you are supposed to be.

They have the brain unlocked with their own specialized in-the-brain interfaces that have software. I’ll come back to you with the full app listing.

Programmable Metamaterials for Software-Defined Electromagnetic Control: Circuits, Systems, and Architectures


They support a network architecture that enables “fine-grained localization.”
Fine-grained recognition [1]–[7] refers to the task of distinguishing sub-ordinate categories such as bird species [8], [9], dog breeds [10], aircraft [11], or car models [12], [13]. It is one of the cornerstones of object recognition due to the potential to make computers rival human experts in visual understanding.

They are classifying cell structure at the nanoscale and are using the same “stable diffusion” methods they use with AI (ChatGPT). This is all being done on the terahertz spectrum. This technology is being used to watch things at the nanoscale and to open and close cells bio-electrically at the level of the ion channel, using optogenetics and software-defined metamaterials. It communicates from your skin deeper into your body, using computer networking in the body to tell the different nodes in your body what to do, using your heart rate to drive it, and using the pressure in your veins and arteries to scoot itself along.

Stable Diffusion:
Stable Diffusion is a deep learning, text-to-image model released in 2022 based on diffusion techniques. It is considered to be a part of the ongoing AI boom.

It is primarily used to generate detailed images conditioned on text descriptions, though it can also be applied to other tasks such as inpainting, outpainting, and generating image-to-image translations guided by a text prompt.[3]
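As a rough sketch of the underlying idea, here is the forward (noising) half of a diffusion process in plain NumPy. Real systems like Stable Diffusion run this in a learned latent space and train a U-Net to reverse it step by step; the 1-D signal and linear beta schedule below are toy stand-ins.

```python
import numpy as np

# Hedged sketch of the *forward* diffusion process that text-to-image models
# are trained to reverse. A 1-D sinusoid stands in for an image; the linear
# beta schedule is a common illustrative choice, not Stable Diffusion's exact
# configuration.
rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)       # per-step noise variances
alphas_bar = np.cumprod(1.0 - betas)     # cumulative signal retention

def noised(x0, t):
    """Sample x_t ~ q(x_t | x_0) in closed form."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1 - alphas_bar[t]) * eps

x0 = np.sin(np.linspace(0, 8 * np.pi, 4096))   # toy "image"
early, late = noised(x0, 10), noised(x0, T - 1)
print(np.corrcoef(x0, early)[0, 1])   # early step: still close to the signal
print(np.corrcoef(x0, late)[0, 1])    # final step: essentially pure noise
```

Generation runs this movie in reverse: starting from pure noise, the trained network repeatedly estimates and subtracts the noise, with the text prompt conditioning each denoising step.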


Deep Learning Intervention for Health Care Challenges: Some Biomedical Domain Considerations 2019

Abstract and Figures

The use of deep learning (DL) for the analysis and diagnosis of biomedical and health care problems has received unprecedented attention in the last decade. The technique has recorded a number of achievements for unearthing meaningful features and accomplishing tasks that were hitherto difficult to solve by other methods and human experts. Currently, biological and medical devices, treatment, and applications are capable of generating large volumes of data in the form of images, sounds, text, graphs, and signals creating the concept of big data. The innovation of DL is a developing trend in the wake of big data for data representation and analysis. DL is a type of machine learning algorithm that has deeper (or more) hidden layers of similar function cascaded into the network and has the capability to make meaning from medical big data. Current transformation drivers to achieve personalized health care delivery will be possible with the use of mobile health (mHealth). DL can provide the analysis for the deluge of data generated from mHealth apps. This paper reviews the fundamentals of DL methods and presents a general view of the trends in DL by capturing literature from PubMed and the Institute of Electrical and Electronics Engineers database publications that implement different variants of DL. We highlight the implementation of DL in health care, which we categorize into biological system, electronic health record, medical image, and physiological signals. In addition, we discuss some inherent challenges of DL affecting biomedical and health domain, as well as prospective research directions that focus on improving health management by promoting the application of physiological signals and modern internet technology.

Deep Learning

Deep learning is a method in artificial intelligence (AI) that teaches computers to process data in a way that is inspired by the human brain.

Deep learning is the subset of machine learning methods based on artificial neural networks (ANNs) with representation learning. The adjective “deep” refers to the use of multiple layers in the network. Methods used can be either supervised, semi-supervised or unsupervised.[2]

Deep-learning architectures such as deep neural networks, deep belief networks, recurrent neural networks, convolutional neural networks and transformers have been applied to fields including computer vision, speech recognition, natural language processing, machine translation, bioinformatics, drug design, medical image analysis, climate science, material inspection and board game programs, where they have produced results comparable to and in some cases surpassing human expert performance.[3][4][5]

Artificial neural networks were inspired by information processing and distributed communication nodes in biological systems. ANNs have various differences from biological brains. Specifically, artificial neural networks tend to be static and symbolic, while the biological brain of most living organisms is dynamic (plastic) and analog.[6][7] ANNs are generally seen as low quality models for brain function.[8]
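As a minimal illustration of what “multiple layers” means in the definition above, here is a toy two-layer forward pass. The weights are random placeholders, not a trained model; layer sizes are arbitrary.

```python
import numpy as np

# Hedged sketch of "deep" = several layers of weighted sums and nonlinearities
# composed in sequence. Random weights stand in for what training would learn.
rng = np.random.default_rng(1)

def layer(x, w, b):
    """One hidden layer: affine map followed by a ReLU nonlinearity."""
    return np.maximum(0.0, x @ w + b)

x = rng.standard_normal(8)                       # an 8-feature input
w1, b1 = rng.standard_normal((8, 16)), np.zeros(16)
w2, b2 = rng.standard_normal((16, 4)), np.zeros(4)

hidden = layer(x, w1, b1)    # layer 1: 8 -> 16, nonlinear
output = hidden @ w2 + b2    # layer 2: 16 -> 4, linear head
print(output.shape)
```

Stacking more such layers, and swapping in convolutional, recurrent, or attention layers, yields the deep architectures listed above; the composition principle is the same.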

Neural networks, a type of computing system loosely modeled on the organization of the human brain, form the basis of many artificial intelligence systems for applications such speech recognition, computer vision, and medical image analysis.

A neural network is a machine learning program, or model, that makes decisions in a manner similar to the human brain, by using processes that mimic the way biological neurons work together to identify phenomena, weigh options and arrive at conclusions.

DARPA’s New Project Is Investing Millions in Brain-Machine Interface Tech

The differences between Artificial and Biological Neural Networks

The idea behind perceptrons (the predecessors to artificial neurons) is that it is possible to mimic certain parts of neurons, such as dendrites, cell bodies and axons, using simplified mathematical models of what limited knowledge we have of their inner workings: signals can be received from dendrites and sent down the axon once enough signals are received. This outgoing signal can then be used as another input for other neurons, repeating the process. Some signals are more important than others and can trigger some neurons to fire more easily. Connections can become stronger or weaker, new connections can appear while others cease to exist. We can mimic most of this process by coming up with a function that receives a list of weighted input signals and outputs some kind of signal if the sum of these weighted inputs reaches a certain bias. Note that this simplified model mimics neither the creation nor the destruction of connections (dendrites or axons) between neurons, and it ignores signal timing. However, this restricted model alone is powerful enough to work with simple classification tasks.

The main differences between artificial and biological neural networks:
Size, topology, speed, fault tolerance, power consumption, signals, and learning.
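The weighted-sum-and-threshold model described above fits in a few lines of code. This is a minimal illustrative perceptron (not taken from any cited source), trained with the classic perceptron learning rule on the linearly separable AND task:

```python
# Hedged sketch of the perceptron described above: a weighted sum of inputs
# "fires" (outputs 1) when it reaches a bias threshold. Integer weights and
# a learning rate of 1 keep the arithmetic exact; the AND gate is a toy task.

def fire(weights, bias, inputs):
    """Output 1 if the weighted input sum reaches the bias threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= bias else 0

def train(samples, lr=1, epochs=20):
    """Perceptron learning rule: nudge weights toward misclassified targets."""
    weights, bias = [0, 0], 0
    for _ in range(epochs):
        for inputs, target in samples:
            err = target - fire(weights, bias, inputs)
            weights = [w + lr * err * x for w, x in zip(weights, inputs)]
            bias -= lr * err      # raising the output means lowering the threshold
    return weights, bias

and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(and_gate)
print([fire(w, b, x) for x, _ in and_gate])   # [0, 0, 0, 1]
```

As the text notes, this single unit handles only linearly separable tasks; stacking many such units into layers is what produces the deep networks discussed earlier.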

A smart Healthcare monitoring system for heart disease prediction based on ensemble deep learning and feature fusion

Mind control system for human interfaces


The present invention relates to a mind control system for controlling a human using a brain wave map and a genetic map. According to the present invention, the mind control system comprises: a nano-electronic chip (NEC) inserted or attached to a human body to transmit and receive a wireless signal; at least one nano-biosensor (NBS) for transmitting an electrical signal generated from the NEC to nerve cells and human cells, and embedded or connected to the NEC for sensing biometric information; and a main super computer (MSC) for transmitting the wireless signal to the NEC to arbitrarily control the nerve cells and genes of the human body, and also used in wired Internet. According to the present invention, a cranial nerve can be controlled such that the physically handicapped can perform an intended action. Also, the mind control system of the present invention is expected to be used for a trading means and a financial transaction including confirmation of personal identification.

Neural Interfaces, The Game Changer Of Wireless Interaction

Arrival Of Neural Interfaces

Neural interfaces have already arrived, and they offer the potential to support a wide range of activities in a wide range of settings. Mudra, for example, has created an Apple Watch band that allows users to interact with the device simply by moving their fingers, or thinking about moving their fingers. That means someone using the device can listen to music or make phone calls without having to stop what they’re doing. It also opens up enormous possibilities for making technology available to people with disabilities who have difficulty with other user interfaces.

Think of this technology as using your fingers, voice, or eyes to control a computer or play a game. It sounds like science fiction, but it’s becoming more real by the day thanks to a few companies developing technology that detects neural activity and converts those measurements into signals computers can understand.

NextMind’s Journey In Neural Interfaces

One of those companies, NextMind, has been shipping its version of the mind-reading technology to developers for over a year. First unveiled at CES in Las Vegas, the company’s neural interface is a black circle that can read brain waves when strapped to the back of a user’s head.

BrainGate: First Human Use Of High-Bandwidth Wireless Brain-Computer Interface 2021

Brain-computer interfaces (BCIs) are an emerging assistive technology, enabling people with paralysis to type on computer screens or manipulate robotic prostheses just by thinking about moving their own bodies. For years, investigational BCIs used in clinical trials have required cables to connect the sensing array in the brain to computers that decode the signals and use them to drive external devices.

Now, for the first time, BrainGate clinical trial participants with tetraplegia have demonstrated use of an intracortical wireless BCI with an external wireless transmitter. The system is capable of transmitting brain signals at single-neuron resolution and in full broadband fidelity without physically tethering the user to a decoding system. The traditional cables are replaced by a small transmitter about 2 inches in its largest dimension and weighing a little over 1.5 ounces. The unit sits on top of a user’s head and connects to an electrode array within the brain’s motor cortex using the same port used by wired systems.
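To give a sense of what “full broadband fidelity” implies, here is a back-of-envelope estimate of the raw data rate from one intracortical array. The channel count matches a typical 96-electrode Utah-style array; the sample rate and bit depth are typical figures for broadband neural recording, assumed here for illustration rather than quoted from the BrainGate publication.

```python
# Hedged back-of-envelope sketch: raw data rate of a wireless intracortical
# BCI. All three figures are typical assumed values, not published specs.
channels = 96          # electrodes in one intracortical array
sample_rate = 30_000   # samples per second per channel (broadband)
bit_depth = 12         # bits per sample

bits_per_second = channels * sample_rate * bit_depth
print(f"{bits_per_second / 1e6:.2f} Mbit/s raw")
```

Tens of megabits per second is far beyond what Bluetooth-class medical telemetry usually carries, which is why a dedicated head-mounted transmitter is needed rather than an off-the-shelf radio.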

Nanotransducers for wireless neuromodulation




Understanding the signal transmission and processing within the central nervous system is a grand challenge in neuroscience. The past decade has witnessed significant advances in the development of new tools to address this challenge. Development of these new tools draws diverse expertise from genetics, materials science, electrical engineering, photonics, and other disciplines.

Among these tools, nanomaterials have emerged as a unique class of neural interfaces because of their small size, remote coupling and conversion of different energy modalities, various delivery methods, and mitigated chronic immune responses.

In this review, we will discuss recent advances in nanotransducers to modulate and interface with the neural system without physical wires.

Nanotransducers work collectively to modulate brain activity through optogenetic, mechanical, thermal, electrical, and chemical modalities. We will compare important parameters among these techniques, including the invasiveness, spatiotemporal precision, cell-type specificity, brain penetration, and translation to large animals and humans.

Important areas for future research include a better understanding of the nanomaterials-brain interface, integration of sensing capability for bidirectional closed-loop neuromodulation, and genetically engineered functional materials for cell-type-specific neuromodulation.

Figure 2. Collection of recently reported nanotransducers for neuromodulation


(A) Nanotransducers for optogenetics;38,39 (B) optical9,40–44,64 and magnetic transducers7,13,22,50,51 for thermal modulation; (C) nanotransducers for mechanical modulation (left, optomechanical transducers10,61,92; middle, magnetomechanical transducers;14,52 right, genetically encoded transducers31,32,53,54,93); (D) nanotransducers for electrical modulation (left, optoelectronic transducers11,45–47,94,95; middle, magnetoelectric transducers12,96; right, piezoelectric transducers30,62); (E) nanotransducers for chemical modulation (left, transducers for opto-uncaging8,48,49; middle, transducers for magneto-uncaging34,55–57; right, transducers for sono-uncaging4,35,36). MLNPs, mechanoluminescent nanoparticles; SPNs, semiconducting polymer nanoconjugates; MNPs, magnetic nanoparticles; PFCs, fluorocarbons (e.g., perfluorobutane [PFB] and perfluoropentane [PFP]).

Improving the Security of the IEEE 802.15.6 Standard for Medical BANs

FIGURE 8. Overview of use case 1: Neural-dust sensors and actuators spread across the brain, communicating with sub-dural transceivers that relay data to and from an external transceiver.

Digital Twins


Digital Twins: From Personalised Medicine to Precision Public Health

A digital twin is a virtual model of a physical entity, with dynamic, bi-directional links between the physical entity and its corresponding twin in the digital domain. Digital twins are increasingly used today in different industry sectors. Applied to medicine and public health, digital twin technology can drive a much-needed radical transformation of traditional electronic health/medical records (focusing on individuals) and their aggregates (covering populations) to make them ready for a new era of precision (and accuracy) medicine and public health. Digital twins enable learning and discovering new knowledge, new hypothesis generation and testing, and in silico experiments and comparisons. They are poised to play a key role in formulating highly personalised treatments and interventions in the future. This paper provides an overview of the technology’s history and main concepts. A number of application examples of digital twins for personalised medicine, public health, and smart healthy cities are presented, followed by a brief discussion of the key technical and other challenges involved in such applications, including ethical issues that arise when digital twins are applied to model humans.

The Digital Twin in Medicine: A Key to the Future of Healthcare?


There is a growing need for precise diagnosis and personalized treatment of disease in recent years. Providing treatment tailored to each patient and maximizing efficacy and efficiency are broad goals of the healthcare system. As an engineering concept that connects the physical entity and digital space, the digital twin (DT) entered our lives at the beginning of Industry 4.0. It is evaluated as a revolution in many industrial fields and has shown the potential to be widely used in the field of medicine. This technology can offer innovative solutions for precise diagnosis and personalized treatment processes. Although there are difficulties in data collection, data fusion, and accurate simulation at this stage, we speculated that the DT may have an increasing use in the future and will become a new platform for personal health management and healthcare services. We introduced the DT technology and discussed the advantages and limitations of its applications in the medical field. This article aims to provide a perspective that combining Big Data, the Internet of Things (IoT), and artificial intelligence (AI) technology; the DT will help establish high-resolution models of patients to achieve precise diagnosis and personalized treatment.

Keywords: digital twin (DT), artificial intelligence (AI), precision medicine, healthcare, big data


The Digital Human Model | Cluster of Excellence SimTech Stuttgart Center for Simulation Science


Human Digital Twin and Modeling Guidebook 2022

Human Digital Twin and Modeling Guidebook

Date: December 19, 2022


“Further, specific products within a product family can be designed and built with slight differences to enable specific mission sets, placing further stress on logistics for sustainment of these products.

Digital twin technology has been developed in recent years to address the problems which arise from product and environmental differences (Tuegel et al.,2011). The digital twin concept includes constructing a digital representation or model of an individual product to improve the accuracy of maintenance and performance predictions for individual products (Kobryn, 2020). Thus, a digital twin has been described as the model of a component, product or system developed by a collection of engineering, operational, and behavioral data which support executable models, where the models evolve over the lifecycle of the system and support the derivation of solutions which assist the real time optimization of the system or service (Boschert & Rosen, 2016).

Recently, this term has been extended to humans using the term “human digital twin”. This term has been applied in diverse fields, including medicine (Chakshu et al., 2019; Corral-Acero et al., 2020; Hirschvogel et al., 2019; Y. Liu et al., 2019; Lutze, 2020), sports performance (Barricelli et al., 2020), manufacturing ergonomics (Caputo et al., 2019; Greco et al., 2020;Sharotry et al., 2020), and product design (Constantinescu et al., 2019; Demirel et al., 2021).

Although the human digital twin concept may be analogous to digital twins of products, there are distinct differences, including increased underlying variability between humans and the fact that humans often employ products to achieve their goals. “
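The core pattern the sources above describe, a virtual model bound to a physical entity by bidirectional links, can be sketched compactly: sensor updates flow in, and the twin can then be queried or simulated without touching the asset. The class and field names below are illustrative, not from any cited framework.

```python
from dataclasses import dataclass, field

# Hedged sketch of the digital-twin pattern: a mirrored model with an inbound
# sync link and an in-silico prediction step. Names and the toy linear
# extrapolation are illustrative assumptions.

@dataclass
class PhysicalAsset:
    """Stands in for the real-world entity emitting sensor readings."""
    temperature: float = 20.0

@dataclass
class DigitalTwin:
    asset: PhysicalAsset
    history: list = field(default_factory=list)

    def sync(self):
        """Inbound link: pull the latest physical state into the model."""
        self.history.append(self.asset.temperature)

    def predict_next(self) -> float:
        """In-silico step: extrapolate from the mirrored state."""
        if len(self.history) < 2:
            return self.history[-1]
        return self.history[-1] + (self.history[-1] - self.history[-2])

asset = PhysicalAsset()
twin = DigitalTwin(asset)
for temp in (20.0, 21.0, 22.0):   # the physical entity drifts upward
    asset.temperature = temp
    twin.sync()
print(twin.predict_next())         # linear extrapolation of the trend
```

Production twins replace the toy extrapolation with physics-based or learned models, and add an outbound link so predictions can drive maintenance or control actions on the physical side.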

From Sabrina “Digital Twin” HBC and HbondingC Aug 6 2023

Your digital twin is the duplicate they have of you, to which they can make database changes instantly.
They have taken our bodies and privately put them on a cyber-physical backbone (4th Industrial Revolution) through biosensors and graphenation. Let’s take a look at their plans for us using digital twins. You see the cyber-physical backbone in the interaction layer: there are things, there are humans, there’s your digital twin, and up top you see they are messing around with simulations before we get the signal in.

Smart Cities World
The digital twin computing platform architecture

Human Digital Twins: Creating New Value Beyond the Constraints of the Real World

Digital Twin Computing is one of the three major technical fields in the IOWN initiative, along with the All-Photonics Network and the Cognitive Foundation. Here we explain the meaning of Digital Twin Computing and introduce one of its key features, “Human Digital Twins,” as well as discussing what Human Digital Twins can be used to achieve in the future.

What is “Digital Twin Computing?”

So what is “Digital Twin Computing?” Digital Twin Computing initially starts with the extremely accurate reproduction of real-world human beings and things in cyber space. A virtual being reproduced in cyber space is known as a “Digital Twin.” In fact, Digital Twins are already being developed for a variety of applications in different industrial sectors, such as in the automobile sector, for autonomous driving; the robotic control sector; and the medical sector.

Digital Twin Computing aims to develop these further and create common processes that allow diverse Digital Twins to be used across sectors.

Digital Twins are electronic data, which means they can be duplicated, merged and exchanged. Digital Twin Computing is the concept of freely mixing various Digital Twins to produce large-scale, highly accurate future predictions that take complex conditions into account. In the future, Digital Twin Computing is expected to make significant contributions to the development of things like autonomous social systems, future urban design, expansions in human capability, and automated decision-making.