Creating the PoolThread

After Zygote forks a new process, app_main's onZygoteInit initializes ProcessState. This class represents the process and is a singleton: there is exactly one instance per process. It then calls ProcessState::startThreadPool to create the binder thread pool.

void ProcessState::startThreadPool()
{
    if (!mThreadPoolStarted) {
        mThreadPoolStarted = true;
        if (mSpawnThreadOnStart) {
            spawnPooledThread(true);
        }
    }
}
void ProcessState::spawnPooledThread(bool isMain)
{
    if (mThreadPoolStarted) {
        String8 name = makeBinderThreadName();
        ALOGV("Spawning new pooled thread, name=%s\n", name.c_str());
        sp<Thread> t = new PoolThread(isMain);
        t->run(name.c_str());
    }
}

The guard condition shows that the thread pool is created only once and cannot be created again. isMain=true marks this as the thread pool's main thread; when the process first starts, only this single thread is created.

Creating IPCThreadState

Starting the main thread also creates the IPCThreadState object. The call t->run(name.c_str()) above causes the new thread to execute the following function:

    virtual bool threadLoop()
    {
        IPCThreadState::self()->joinThreadPool(mIsMain);
        return false;
    }

By the time IPCThreadState::joinThreadPool runs, we are already on a binder thread, and it immediately establishes a connection with the binder driver:

void IPCThreadState::joinThreadPool(bool isMain)
{
    status_t result;
    //1
    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);

    do {
        //2
        result = getAndExecuteCommand();
    } while (result != -ECONNREFUSED && result != -EBADF);
    ...
}

  1. joinThreadPool puts the thread into an effectively infinite loop. It first writes the BC_ENTER_LOOPER command into mOut. IPCThreadState keeps two key buffers: mOut holds data to be sent to the binder driver, and mIn holds data received from it.

  2. getAndExecuteCommand reads data from the binder driver and dispatches the commands it contains.
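The mOut/mIn exchange above can be modeled as two command queues. The sketch below is a toy analogue with hypothetical names (the `_SIM` constants and `join_thread_pool_sim` are illustrative, not real Binder API): write the "enter looper" command to the outgoing queue, then consume incoming commands until a fatal status appears, like the do/while in joinThreadPool.

```cpp
#include <cerrno>
#include <cstdint>
#include <deque>

// Illustrative stand-ins for the real command/status values.
enum : int32_t { BC_ENTER_LOOPER_SIM = 1, BR_WORK = 2, BR_DEAD = -ECONNREFUSED };

// Toy model of joinThreadPool: announce ourselves via the out queue
// (mOut.writeInt32(BC_ENTER_LOOPER)), then loop pulling commands from
// the in queue until a fatal status ends the loop; returns the number
// of work items handled.
int join_thread_pool_sim(std::deque<int32_t>& out, std::deque<int32_t>& in) {
    out.push_back(BC_ENTER_LOOPER_SIM);
    int handled = 0;
    int32_t result = 0;
    do {
        if (in.empty()) break;       // nothing more from the "driver"
        int32_t cmd = in.front();    // mIn.readInt32()
        in.pop_front();
        if (cmd == BR_WORK) { ++handled; result = 0; }
        else result = cmd;           // treat anything else as a status
    } while (result != -ECONNREFUSED && result != -EBADF);
    return handled;
}
```

The loop exits on the same two statuses as the real code: -ECONNREFUSED and -EBADF, which indicate the driver connection is gone.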

The binder thread enters the binder driver

status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;

    result = talkWithDriver();
    if (result >= NO_ERROR) {
        cmd = mIn.readInt32();
        result = executeCommand(cmd);
    }
    return result;
}

getAndExecuteCommand repeatedly calls talkWithDriver to read the data the binder driver has delivered, then calls executeCommand to parse and handle it.

The function that actually talks to the driver is talkWithDriver:

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    //1
    binder_write_read bwr;
    //2
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();
    //3
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();
    //4
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }
    ...
    status_t err;
    //5
    if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
        err = NO_ERROR;
    else
        err = -errno;
    ...
}
  1. bwr is the structure handed to the binder driver.

  2. mIn starts out empty, so its size is 0 and needRead is true.

  3. With doReceive=true and needRead=true, outAvail equals the amount of data in mOut.

  4. Both conditions are true, so read_size and write_size are both greater than 0.

  5. ioctl issues the BINDER_WRITE_READ command; the corresponding driver entry point is static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg). Here the thread traps into the kernel and goes to sleep until the binder driver has data for it and wakes this binder thread up.
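The size computations in points 2-4 can be checked with a small model. This is a hypothetical helper (`compute_bwr_sizes` is not part of libbinder) that reproduces the needRead/outAvail logic from the snippet above:

```cpp
#include <cstddef>

struct BwrSizes { std::size_t write_size; std::size_t read_size; };

// Mirrors the decision logic at the top of talkWithDriver: decide how
// much to write to and how much room to offer the driver for replies,
// given whether we want to receive and the state of the two buffers.
BwrSizes compute_bwr_sizes(bool doReceive,
                           std::size_t inPos, std::size_t inSize,
                           std::size_t inCapacity,
                           std::size_t outSize) {
    const bool needRead = inPos >= inSize;   // mIn fully consumed?
    const std::size_t outAvail = (!doReceive || needRead) ? outSize : 0;
    BwrSizes bwr{outAvail, 0};
    if (doReceive && needRead)
        bwr.read_size = inCapacity;          // room for the driver's reply
    return bwr;
}
```

On the first joinThreadPool iteration mIn is empty (position and size both 0), so both sizes come out non-zero, matching points 2-4; conversely, if mIn still holds unread data, nothing is written or read until it is drained.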

Binder thread sleep and wakeup

Once ioctl enters the driver, let's look at how the kernel puts the thread to sleep and wakes it up. The driver code below is from binder.c.

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    ...
    switch (cmd) {
    case BINDER_WRITE_READ:
       ret = binder_ioctl_write_read(filp, arg, thread);
       if (ret)
          goto err;
       break;
    ...
}

static int binder_ioctl_write_read(struct file *filp, unsigned long arg,
				struct binder_thread *thread)
{
	...
	if (bwr.write_size > 0) {
		ret = binder_thread_write(proc, thread,
					  bwr.write_buffer,
					  bwr.write_size,
					  &bwr.write_consumed);
	}
	if (bwr.read_size > 0) {
		ret = binder_thread_read(proc, thread, bwr.read_buffer,
					 bwr.read_size,
					 &bwr.read_consumed,
					 filp->f_flags & O_NONBLOCK);
		...
	}
	...
}

Inside binder_ioctl, the BINDER_WRITE_READ command passed from talkWithDriver routes to binder_ioctl_write_read, which checks bwr.write_size and bwr.read_size. From the talkWithDriver analysis above, both are greater than 0, so binder_thread_write runs first, followed by binder_thread_read.

binder_thread_write

static int binder_thread_write(struct binder_proc *proc,
			struct binder_thread *thread,
			binder_uintptr_t binder_buffer, size_t size,
			binder_size_t *consumed)
{
	uint32_t cmd;
	void __user *ptr = (void __user *)(uintptr_t)binder_buffer + *consumed;

	if (get_user(cmd, (uint32_t __user *)ptr))
		return -EFAULT;

	switch (cmd) {
	...
	case BC_ENTER_LOOPER:
		thread->looper |= BINDER_LOOPER_STATE_ENTERED;
		break;
	...
	}
	...
}

binder_thread_write first reads the cmd value, which is the BC_ENTER_LOOPER written into mOut in joinThreadPool. Its effect is to tell the driver that the binder main thread is ready.
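thread->looper is a bitmask of BINDER_LOOPER_STATE_* flags. The sketch below models a few of them with the values used in recent binder.c (treat the exact values as illustrative, and `is_looper_ready` as a hypothetical helper):

```cpp
#include <cstdint>

// Flag values mirroring the kernel's BINDER_LOOPER_STATE_* bits.
enum LooperState : uint32_t {
    LOOPER_REGISTERED = 0x01,   // BC_REGISTER_LOOPER seen
    LOOPER_ENTERED    = 0x02,   // BC_ENTER_LOOPER seen
    LOOPER_WAITING    = 0x10,   // currently sleeping in binder_thread_read
};

// The driver treats a thread as a usable looper once either of the
// "checked in" bits is set.
bool is_looper_ready(uint32_t looper) {
    return (looper & (LOOPER_REGISTERED | LOOPER_ENTERED)) != 0;
}
```

The |= in the snippet above sets the ENTERED bit without disturbing any other state bits, which is why a bitmask is used rather than a single enum value.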

binder_thread_read


static int binder_thread_read(struct binder_proc *proc,
				struct binder_thread *thread,
				binder_uintptr_t binder_buffer, size_t size,
				binder_size_t *consumed, int non_block)
{
	...
retry:
	//1
	wait_for_proc_work = binder_available_for_proc_work_ilocked(thread);

	thread->looper |= BINDER_LOOPER_STATE_WAITING;
	...
	//2
	ret = binder_wait_for_work(thread, wait_for_proc_work);

	thread->looper &= ~BINDER_LOOPER_STATE_WAITING;
	...
}

  1. Checks whether the thread should wait for process-wide work; the condition inside is !thread->transaction_stack && list_empty(&thread->todo)

  2. If the todo queue is empty, there is no cross-process binder transaction to handle, so binder_wait_for_work puts the thread to sleep on thread->wait until a transaction lands on a todo queue; at that point the thread is woken up and goes on to parse the binder data sent by the client process.

Now let's see what triggers the wakeup.

binder_thread_write contains the following call chain:

binder_thread_write()
   binder_transaction()
      binder_proc_transaction()
         binder_wakeup_thread_ilocked()

static void binder_wakeup_thread_ilocked(struct binder_proc *proc,
					 struct binder_thread *thread,
					 bool sync)
{
	...
	wake_up_interruptible(&thread->wait);
	...
}

When a client process starts a binder transaction, the kernel ends up in the call chain above: the sending thread finds a thread in the target process, builds the binder_transaction data, queues it on the target thread's todo list, and then calls wake_up_interruptible. That wakes the target thread so it can begin handling the client's request.
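The sleep/wake dance between binder_wait_for_work and binder_wakeup_thread_ilocked can be sketched in user space with a condition variable. This is an analogue under stated assumptions, not driver code: `TodoQueue` is a made-up type, the condition variable plays the role of thread->wait, and `post` plays the client/kernel side that enqueues a transaction and wakes the target.

```cpp
#include <condition_variable>
#include <deque>
#include <mutex>
#include <thread>

struct TodoQueue {
    std::mutex m;
    std::condition_variable cv;    // plays the role of thread->wait
    std::deque<int> todo;          // plays the role of thread->todo
    bool shutting_down = false;

    // Client side: queue a transaction, then wake the target
    // (binder_wakeup_thread_ilocked -> wake_up_interruptible).
    void post(int txn) {
        { std::lock_guard<std::mutex> lk(m); todo.push_back(txn); }
        cv.notify_one();
    }

    void shutdown() {
        { std::lock_guard<std::mutex> lk(m); shutting_down = true; }
        cv.notify_one();
    }

    // Target binder thread: sleep while the todo list is empty
    // (binder_wait_for_work), then take one item; false means exit.
    bool wait_for_work(int& txn) {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&]{ return !todo.empty() || shutting_down; });
        if (todo.empty()) return false;
        txn = todo.front();
        todo.pop_front();
        return true;
    }
};
```

As in the driver, the consumer does not spin: it is scheduled out inside the wait and only runs again after a producer both enqueues work and signals.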

Thread pool expansion

Worker thread spawning

If the main thread is busy handling a binder request when another request arrives, and the binder driver finds that the target process has no spare binder thread, it sends the process the BR_SPAWN_LOOPER command. In the main thread, executeCommand then takes the following branch:

status_t IPCThreadState::executeCommand(int32_t cmd)
{
    status_t result = NO_ERROR;

    switch (cmd) {
    ...
    case BR_SPAWN_LOOPER:
        mProcess->spawnPooledThread(false);
        break;
    ...
    }

    return result;
}

ProcessState::spawnPooledThread creates the new thread; passing false marks it as a non-main thread. This brings us back to the spawnPooledThread function analyzed above.

void IPCThreadState::joinThreadPool(bool isMain)
{
    ...
    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);
    mIsLooper = true;
    do {
        result = getAndExecuteCommand();
        ...
    } while (result != -ECONNREFUSED && result != -EBADF);
    ...
}

When the newly spawned worker thread connects to the binder driver, the cmd it sends is BC_REGISTER_LOOPER, which lets the kernel know that the process has one more binder thread:

static int binder_thread_write(struct binder_proc *proc,
			struct binder_thread *thread,
			binder_uintptr_t binder_buffer, size_t size,
			binder_size_t *consumed)
{
	uint32_t cmd;
	void __user *ptr = (void __user *)(uintptr_t)binder_buffer + *consumed;

	if (get_user(cmd, (uint32_t __user *)ptr))
		return -EFAULT;

	switch (cmd) {
	case BC_REGISTER_LOOPER:
		...
		proc->requested_threads--;
		proc->requested_threads_started++;
		thread->looper |= BINDER_LOOPER_STATE_REGISTERED;
		break;
	...
	}
	...
}
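The two counters above are the driver's bookkeeping for pool growth. The sketch below is a simplified model of that accounting (the real spawn check in binder_thread_read also requires idle-thread and looper-state conditions, which are omitted here; `BinderProcSim` and both functions are illustrative names):

```cpp
// Simplified per-process counters the driver keeps while growing the pool.
struct BinderProcSim {
    int requested_threads = 0;          // spawn requests in flight
    int requested_threads_started = 0;  // BC_REGISTER_LOOPER acks received
    int max_threads = 15;               // set via BINDER_SET_MAX_THREADS
};

// Should the driver ask userspace to spawn another looper (BR_SPAWN_LOOPER)?
// At most one request is outstanding, and the pool is capped at max_threads.
bool should_request_spawn(const BinderProcSim& p) {
    return p.requested_threads == 0 &&
           p.requested_threads_started < p.max_threads;
}

// Driver side of BC_REGISTER_LOOPER: the new thread has checked in.
void on_register_looper(BinderProcSim& p) {
    p.requested_threads--;
    p.requested_threads_started++;
}
```

This explains the decrement/increment pair in the snippet above: BC_REGISTER_LOOPER both retires the outstanding request and counts the thread toward the max_threads cap.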

Worker thread exit

A worker thread exits automatically after it finishes its work or times out:

void IPCThreadState::joinThreadPool(bool isMain)
{
    status_t result;
    do {
        ...
        // Let this thread exit the thread pool if it is no longer
        // needed and it is not the main process thread.
        if (result == TIMED_OUT && !isMain) {
            break;
        }
    } while (result != -ECONNREFUSED && result != -EBADF);

    mOut.writeInt32(BC_EXIT_LOOPER);
    mIsLooper = false;
    talkWithDriver(false);
}

In the while loop of joinThreadPool, a non-main thread that times out breaks out of the loop and sends the BC_EXIT_LOOPER command to the binder driver, after which the thread finishes running.
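The exit policy can be captured in a small model. This is a toy (`_SIM` values and `loop_iterations` are made up for illustration; the real TIMED_OUT lives in utils/Errors.h): feed the loop a sequence of statuses and count how many iterations run before it exits.

```cpp
#include <vector>

// Illustrative stand-ins for the real status values.
enum StatusSim : int { OK_SIM = 0, TIMED_OUT_SIM = 1, DEAD_SIM = -111 };

// Mirrors joinThreadPool's exit policy: a non-main thread quits on
// TIMED_OUT; any thread quits on a fatal driver status.
int loop_iterations(bool isMain, const std::vector<int>& statuses) {
    int iterations = 0;
    for (int result : statuses) {
        ++iterations;
        if (result == TIMED_OUT_SIM && !isMain) break;  // worker gives up
        if (result == DEAD_SIM) break;                  // -ECONNREFUSED/-EBADF
    }
    return iterations;
}
```

The asymmetry is the point: the main thread (isMain=true) ignores timeouts and keeps the pool alive, while spawned workers shrink the pool back down when idle.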

One reason Zygote stays single-threaded

As shown earlier, the startThreadPool function that creates the binder thread pool uses the mThreadPoolStarted flag:

void ProcessState::startThreadPool()
{
    if (!mThreadPoolStarted) {
        mThreadPoolStarted = true;
        if (mSpawnThreadOnStart) {
            spawnPooledThread(true);
        }
    }
}

Suppose Zygote were a multi-threaded process and also called startThreadPool to create its own binder thread pool. After fork, a child process inherits the parent's memory, including the state of the ProcessState singleton. The child would then see mThreadPoolStarted=true and would be unable to initialize its own binder thread pool.
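The inheritance problem is easy to demonstrate with a plain POSIX fork (Linux assumed). `child_inherits_started_flag` and the `started` global are hypothetical stand-ins for ProcessState::mThreadPoolStarted, not Android code:

```cpp
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

// Stand-in for ProcessState::mThreadPoolStarted.
static bool started = false;

// Set the flag before fork(), then check what the child sees.
// The child's copy-on-write memory keeps the parent's value, so a
// startThreadPool-style guard would refuse to run in the child.
bool child_inherits_started_flag() {
    started = true;                    // "parent called startThreadPool()"
    pid_t pid = fork();
    if (pid == 0) {
        _exit(started ? 1 : 0);        // child reports its view of the flag
    }
    int status = 0;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) && WEXITSTATUS(status) == 1;
}
```

This is why Zygote deliberately avoids starting binder threads (and other threads) before forking: children must be able to set up their own pools from a clean state.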

Spring breeze thick with the scent of flowers; autumn moon clear over the cold river.