Writing a Java Android Binder Service by Hand, with Source Code Analysis
Preface
In the previous article, Writing a C++ Android Binder Service by Hand, with Source Code Analysis, we walked through how to write your own Binder service in C++ and analyzed its source code. After that article you should be able to write a C++ Binder service with ease and have a good grasp of how it works under the hood.
This article explains how to write your own Binder service in Java and, by analyzing the code, digs into how a Java Binder service is implemented.
In essence, a Java Binder service is a wrapper around the C++ Binder interfaces, a C++ Binder service is a wrapper around the C Binder interfaces, and the C layer ultimately calls into the Binder driver in the Linux kernel. Each layer builds on the one below: the higher-level the language, the deeper the wrapping and the more convenient it is for developers. I therefore recommend reading Writing a C++ Android Binder Service by Hand, with Source Code Analysis before this article; it will deepen your understanding of Binder services.
If you are not content with merely being able to write Binder services in Java and C++ and want to understand Android Binder down in the kernel, I recommend continuing with this series:
Android Binder Explained Deep into the Kernel, Part I
Android Binder Explained Deep into the Kernel, Part II
Android Binder Explained Deep into the Kernel, Part III
Binder is the bridge through which Android's service processes communicate. If you want to read the Android source code, Binder is an indispensable topic, and mastering how it works is crucial for reading and understanding those sources. That is why I spent half a year, on and off, determined to work through the Binder source code. I will publish more articles on the Android system source later on; let's grow together from programmers who only write Android apps into engineers who can review and optimize Google's source code. It is a hard road, but let's keep at it!
Now for the final article in this Binder series.
I. A Binder Service Demo Written in Java
1. What the Binder service demo does
We keep the same functionality as the C++ Binder service from the previous article, Writing a C++ Android Binder Service by Hand, with Source Code Analysis: the Java Binder service likewise exposes two functions, sayhello and sayhello_to, for clients to call.
2. Code structure of the Binder service demo
- IHelloService.aidl declares the service interface
- IHelloService.java is generated automatically from IHelloService.aidl; it contains the auto-generated proxy class IHelloService.Stub.Proxy, and a client only needs this Proxy class to make cross-process calls to the server
- HelloService.java extends IHelloService.Stub and provides the local implementation of the service
- TestServer.java implements the server side
- TestClient.java implements the client side
3. Code of the Binder service demo
3.1 IHelloService.aidl
/** {@hide} */
interface IHelloService
{
void sayhello();
int sayhello_to(String name);
}
How does the build system generate the Java class from the .aidl file?
- Put the .aidl file in the source tree: https://github.com/CyanogenMod/android_frameworks_base/tree/cm-14.1/core/java/android/os
- Add the .aidl file's path to the build file in the source tree: https://github.com/CyanogenMod/android_frameworks_base/blob/cm-14.1/Android.mk
- Build it: mmm frameworks/base
3.2 IHelloService.java (auto-generated)
The auto-generated Java code contains three core pieces:
- The IHelloService interface declares the two service functions, sayhello and sayhello_to
- The abstract class IHelloService.Stub implements the IHelloService interface but does not implement sayhello and sayhello_to; those two functions are left to its subclass HelloService. The Java HelloService class plays the same role as BnHelloService.cpp does in C++
- IHelloService.Stub.Proxy is the service's proxy class, fully auto-generated and called directly by the client. It implements sayhello and sayhello_to from the IHelloService interface and corresponds to BpHelloService.cpp in C++
/*
* This file is auto-generated. DO NOT MODIFY.
* Original file: frameworks/base/core/java/android/os/IHelloService.aidl
*/
/** {@hide} */
public interface IHelloService extends android.os.IInterface
{
/** Local-side IPC implementation stub class. */
//The abstract class Stub does not implement sayhello/sayhello_to from IHelloService; they are left to Stub's subclass, which plays the same role as BnHelloService.cpp in C++
public static abstract class Stub extends android.os.Binder implements IHelloService
{
private static final java.lang.String DESCRIPTOR = "IHelloService";
/** Construct the stub at attach it to the interface. */
public Stub()
{
this.attachInterface(this, DESCRIPTOR);
}
/**
* Cast an IBinder object into an IHelloService interface,
* generating a proxy if needed.
*/
public static IHelloService asInterface(android.os.IBinder obj)
{
if ((obj==null)) {
return null;
}
android.os.IInterface iin = obj.queryLocalInterface(DESCRIPTOR);
if (((iin!=null)&&(iin instanceof IHelloService))) {
return ((IHelloService)iin);
}
return new IHelloService.Stub.Proxy(obj);
}
@Override public android.os.IBinder asBinder()
{
return this;
}
//The server dispatches to the right service function based on the transaction code
@Override public boolean onTransact(int code, android.os.Parcel data, android.os.Parcel reply, int flags) throws android.os.RemoteException
{
switch (code)
{
case INTERFACE_TRANSACTION:
{
reply.writeString(DESCRIPTOR);
return true;
}
case TRANSACTION_sayhello:
{
data.enforceInterface(DESCRIPTOR);
this.sayhello();
reply.writeNoException();
return true;
}
case TRANSACTION_sayhello_to:
{
data.enforceInterface(DESCRIPTOR);
java.lang.String _arg0;
_arg0 = data.readString();
int _result = this.sayhello_to(_arg0);
reply.writeNoException();
reply.writeInt(_result);
return true;
}
}
return super.onTransact(code, data, reply, flags);
}
//The Proxy class is called directly by the client; it is equivalent to BpHelloService.cpp in C++
private static class Proxy implements IHelloService
{
private android.os.IBinder mRemote;
Proxy(android.os.IBinder remote)
{
mRemote = remote;
}
@Override public android.os.IBinder asBinder()
{
return mRemote;
}
public java.lang.String getInterfaceDescriptor()
{
return DESCRIPTOR;
}
@Override public void sayhello() throws android.os.RemoteException
{
android.os.Parcel _data = android.os.Parcel.obtain();
android.os.Parcel _reply = android.os.Parcel.obtain();
try {
_data.writeInterfaceToken(DESCRIPTOR);
mRemote.transact(Stub.TRANSACTION_sayhello, _data, _reply, 0);
_reply.readException();
}
finally {
_reply.recycle();
_data.recycle();
}
}
@Override public int sayhello_to(java.lang.String name) throws android.os.RemoteException
{
android.os.Parcel _data = android.os.Parcel.obtain();
android.os.Parcel _reply = android.os.Parcel.obtain();
int _result;
try {
_data.writeInterfaceToken(DESCRIPTOR);
_data.writeString(name);
mRemote.transact(Stub.TRANSACTION_sayhello_to, _data, _reply, 0);
_reply.readException();
_result = _reply.readInt();
}
finally {
_reply.recycle();
_data.recycle();
}
return _result;
}
}
static final int TRANSACTION_sayhello = (android.os.IBinder.FIRST_CALL_TRANSACTION + 0);
static final int TRANSACTION_sayhello_to = (android.os.IBinder.FIRST_CALL_TRANSACTION + 1);
}
public void sayhello() throws android.os.RemoteException;
public int sayhello_to(java.lang.String name) throws android.os.RemoteException;
}
3.3 HelloService.java
HelloService extends the abstract class IHelloService.Stub and provides the concrete implementation of the functions the service offers.
import android.util.Slog;
public class HelloService extends IHelloService.Stub {
private static final String TAG = "HelloService";
private int cnt1 = 0;
private int cnt2 = 0;
public void sayhello() throws android.os.RemoteException {
cnt1++;
Slog.i(TAG, "sayhello : cnt = "+cnt1);
}
public int sayhello_to(java.lang.String name) throws android.os.RemoteException {
cnt2++;
Slog.i(TAG, "sayhello_to "+name+" : cnt = "+cnt2);
return cnt2;
}
}
3.4 TestServer.java
The server side boils down to the same steps as before: add the service, then loop waiting for client data. In Java, the read/parse/dispatch loop is handled by the Binder threads created when app_process starts the program, so the main thread only has to stay alive (an alternative sketch follows the code below).
import android.util.Slog;
import android.os.ServiceManager;
/* 1. addService
* 2. while(true) { read data, parse data, call function, reply }
*/
public class TestServer {
private static final String TAG = "TestServer";
public static void main(String args[])
{
/* add Service */
Slog.i(TAG, "add hello service");
ServiceManager.addService("hello", new HelloService());
//The main thread only needs to stay alive.
//Reading, parsing, and dispatching requests is handled by the two Binder child threads created when app_process starts the program
while (true)
{
try {
Thread.sleep(100);
} catch (Exception e){}
}
}
}
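As a side note, instead of parking the main thread in a sleep loop, the main thread could also join the Binder thread pool itself via the hidden BinderInternal.joinThreadPool() API (its JNI registration appears later in gBinderInternalMethods). The sketch below is a hypothetical variant, not part of the original demo, and assumes the code is built inside the platform source tree where com.android.internal.os.BinderInternal is visible.
import android.os.ServiceManager;
import android.util.Slog;
import com.android.internal.os.BinderInternal;
/* Hypothetical variant of TestServer: the main thread joins the Binder
 * thread pool instead of sleeping, and handles transactions itself. */
public class TestServer2 {
    private static final String TAG = "TestServer2";
    public static void main(String args[])
    {
        Slog.i(TAG, "add hello service");
        ServiceManager.addService("hello", new HelloService());
        // Block the main thread inside the Binder loop; incoming
        // transactions are dispatched to HelloService via onTransact().
        BinderInternal.joinThreadPool();
    }
}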
3.5 TestClient.java
The client side boils down to getting the service and then calling its functions.
import android.util.Slog;
import android.os.ServiceManager;
import android.os.IBinder;
/* 1. getService
* 2. call the service's sayhello / sayhello_to
*
*/
/* test_client <hello|goodbye> [name] */
public class TestClient {
private static final String TAG = "TestClient";
public static void main(String args[])
{
if (args.length == 0)
{
System.out.println("Usage: need parameter: <hello|goodbye> [name]");
return;
}
if (args[0].equals("hello"))
{
/* 1. getService */
IBinder binder = ServiceManager.getService("hello");
if (binder == null)
{
System.out.println("can not get hello service");
Slog.i(TAG, "can not get hello service");
return;
}
IHelloService svr = IHelloService.Stub.asInterface(binder);
if (args.length == 1)
{
try {
svr.sayhello();
System.out.println("call sayhello");
Slog.i(TAG, "call sayhello");
} catch (Exception e) {}
}
else
{
try {
int cnt = svr.sayhello_to(args[1]);
System.out.println("call sayhello_to "+args[1]+" : cnt = "+cnt);
Slog.i(TAG, "call sayhello_to "+args[1]+" : cnt = "+cnt);
} catch (Exception e) {
System.out.println("call sayhello_to , err :"+e);
Slog.i(TAG, "call sayhello_to , err : "+e);
}
}
}
}
}
3.6 Android.mk for building the Binder service demo
# Copyright 2008 The Android Open Source Project
#
LOCAL_PATH:= $(call my-dir)
include $(CLEAR_VARS)
LOCAL_SRC_FILES := HelloService.java IHelloService.java TestServer.java
LOCAL_MODULE := TestServer
include $(BUILD_JAVA_LIBRARY)
include $(CLEAR_VARS)
LOCAL_SRC_FILES := HelloService.java IHelloService.java TestClient.java
LOCAL_MODULE := TestClient
include $(BUILD_JAVA_LIBRARY)
II. Source Code Analysis of the Java Binder Service Demo
Above we covered how to write your own Android Binder service in Java; now let's dig into the internal logic of that Java Binder service. Java eventually calls down into the C++ wrappers, and the C++ Binder implementation was already analyzed in the previous article, Writing a C++ Android Binder Service by Hand, with Source Code Analysis, so the analysis here stops at the C++ layer.
1. How the client sends data
1.1 Source analysis: adding a service
Adding a service via addService is implemented by the addService method of a ServiceManagerProxy object, which ultimately sends the data to servicemanager through mRemote.transact.
public class TestServer {
private static final String TAG = "TestServer";
public static void main(String args[])
{
/* add Service */
Slog.i(TAG, "add hello service");
ServiceManager.addService("hello", new HelloService());
//The main thread only needs to stay alive.
//Reading, parsing, and dispatching requests is handled by the two Binder child threads created when app_process starts the program
while (true)
{
try {
Thread.sleep(100);
} catch (Exception e){}
}
}
}
public static void addService(String name, IBinder service) {
try {
getIServiceManager().addService(name, service, false);
} catch (RemoteException e) {
Log.e(TAG, "error in addService", e);
}
}
private static IServiceManager getIServiceManager() {
if (sServiceManager != null) {
return sServiceManager;
}
// Find the service manager
sServiceManager = ServiceManagerNative
.asInterface(Binder.allowBlocking(BinderInternal.getContextObject()));
return sServiceManager;
}
/**
* Cast a Binder object into a service manager interface, generating
* a proxy if needed.
*/
static public IServiceManager asInterface(IBinder obj)
{
if (obj == null) {
return null;
}
IServiceManager in =
(IServiceManager)obj.queryLocalInterface(descriptor);
if (in != null) {
return in;
}
return new ServiceManagerProxy(obj);
}
// addService is implemented by the addService method of the ServiceManagerProxy object
//Source: https://github.com/CyanogenMod/android_frameworks_base/blob/cm-14.1/core/java/android/os/ServiceManagerNative.java#L109C7-L109C26
public void addService(String name, IBinder service, boolean allowIsolated)
throws RemoteException {
Parcel data = Parcel.obtain();
Parcel reply = Parcel.obtain();
data.writeInterfaceToken(IServiceManager.descriptor);
data.writeString(name);
data.writeStrongBinder(service);
data.writeInt(allowIsolated ? 1 : 0);
mRemote.transact(ADD_SERVICE_TRANSACTION, data, reply, 0);
reply.recycle();
data.recycle();
}
1.2 Source analysis: getting a service
The client's getService is implemented by the getService method of a ServiceManagerProxy object, and it too ultimately sends data to servicemanager through mRemote.transact.
public class TestClient {
private static final String TAG = "TestClient";
public static void main(String args[])
{
if (args.length == 0)
{
System.out.println("Usage: need parameter: <hello|goodbye> [name]");
return;
}
if (args[0].equals("hello"))
{
/* 1. getService */
IBinder binder = ServiceManager.getService("hello");
if (binder == null)
{
System.out.println("can not get hello service");
Slog.i(TAG, "can not get hello service");
return;
}
IHelloService svr = IHelloService.Stub.asInterface(binder);
if (args.length == 1)
{
try {
svr.sayhello();
System.out.println("call sayhello");
Slog.i(TAG, "call sayhello");
} catch (Exception e) {}
}
else
{
try {
int cnt = svr.sayhello_to(args[1]);
System.out.println("call sayhello_to "+args[1]+" : cnt = "+cnt);
Slog.i(TAG, "call sayhello_to "+args[1]+" : cnt = "+cnt);
} catch (Exception e) {
System.out.println("call sayhello_to , err :"+e);
Slog.i(TAG, "call sayhello_to , err : "+e);
}
}
}
}
}
/**
* Returns a reference to a service with the given name.
*
* @param name the name of the service to get
* @return a reference to the service, or <code>null</code> if the service doesn't exist
* @hide
*/
@UnsupportedAppUsage
public static IBinder getService(String name) {
try {
IBinder service = sCache.get(name);
if (service != null) {
return service;
} else {
return Binder.allowBlocking(rawGetService(name));
}
} catch (RemoteException e) {
Log.e(TAG, "error in getService", e);
}
return null;
}
private static IBinder rawGetService(String name) throws RemoteException {
final long start = sStatLogger.getTime();
final IBinder binder = getIServiceManager().getService(name);
......
return binder;
}
}
private static IServiceManager getIServiceManager() {
if (sServiceManager != null) {
return sServiceManager;
}
// Find the service manager
sServiceManager = ServiceManagerNative
.asInterface(Binder.allowBlocking(BinderInternal.getContextObject()));
return sServiceManager;
}
/**
* Cast a Binder object into a service manager interface, generating
* a proxy if needed.
*/
static public IServiceManager asInterface(IBinder obj)
{
if (obj == null) {
return null;
}
IServiceManager in =
(IServiceManager)obj.queryLocalInterface(descriptor);
if (in != null) {
return in;
}
return new ServiceManagerProxy(obj);
}
// getService is implemented by the getService method of the ServiceManagerProxy object
//Source: https://github.com/CyanogenMod/android_frameworks_base/blob/cm-14.1/core/java/android/os/ServiceManagerNative.java#L109C7-L109C26
public IBinder getService(String name) throws RemoteException {
Parcel data = Parcel.obtain();
Parcel reply = Parcel.obtain();
data.writeInterfaceToken(IServiceManager.descriptor);
data.writeString(name);
mRemote.transact(GET_SERVICE_TRANSACTION, data, reply, 0);
IBinder binder = reply.readStrongBinder();
reply.recycle();
data.recycle();
return binder;
}
1.3 Source analysis: the client invoking a service function
The client uses the service by calling methods on the IHelloService.Stub.Proxy class, which again ends up sending data to the server through mRemote.transact.
public class TestClient {
private static final String TAG = "TestClient";
public static void main(String args[])
{
if (args.length == 0)
{
System.out.println("Usage: need parameter: <hello|goodbye> [name]");
return;
}
if (args[0].equals("hello"))
{
/* 1. getService */
IBinder binder = ServiceManager.getService("hello");
if (binder == null)
{
System.out.println("can not get hello service");
Slog.i(TAG, "can not get hello service");
return;
}
// Obtain the service's proxy, an IHelloService.Stub.Proxy object
IHelloService svr = IHelloService.Stub.asInterface(binder);
if (args.length == 1)
{
try {
// Invoke the service function
svr.sayhello();
System.out.println("call sayhello");
Slog.i(TAG, "call sayhello");
} catch (Exception e) {}
}
else
{
try {
// Invoke the service function
int cnt = svr.sayhello_to(args[1]);
System.out.println("call sayhello_to "+args[1]+" : cnt = "+cnt);
Slog.i(TAG, "call sayhello_to "+args[1]+" : cnt = "+cnt);
} catch (Exception e) {
System.out.println("call sayhello_to , err :"+e);
Slog.i(TAG, "call sayhello_to , err : "+e);
}
}
}
}
}
@Override public void sayhello() throws android.os.RemoteException
{
android.os.Parcel _data = android.os.Parcel.obtain();
android.os.Parcel _reply = android.os.Parcel.obtain();
try {
_data.writeInterfaceToken(DESCRIPTOR);
// Send data to the server
mRemote.transact(Stub.TRANSACTION_sayhello, _data, _reply, 0);
_reply.readException();
}
finally {
_reply.recycle();
_data.recycle();
}
}
@Override public int sayhello_to(java.lang.String name) throws android.os.RemoteException
{
android.os.Parcel _data = android.os.Parcel.obtain();
android.os.Parcel _reply = android.os.Parcel.obtain();
int _result;
try {
_data.writeInterfaceToken(DESCRIPTOR);
_data.writeString(name);
// Send data to the server
mRemote.transact(Stub.TRANSACTION_sayhello_to, _data, _reply, 0);
_reply.readException();
_result = _reply.readInt();
}
finally {
_reply.recycle();
_data.recycle();
}
return _result;
}
From the analysis above, every cross-process send on the Java side goes through mRemote.transact in a proxy class (a minimal sketch of what the generated Proxy does with it is shown right below). So what exactly is mRemote in the servicemanager proxy ServiceManagerProxy, and in HelloService's proxy class IHelloService.Stub.Proxy? Let's keep digging.
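To make the role of mRemote.transact concrete, here is a minimal sketch of what the generated Proxy does under the hood, written as a raw transact call against the IBinder returned by getService. The class name is hypothetical; the descriptor and transaction code simply mirror the generated IHelloService.java shown above, so this is an illustration rather than code from the demo.
import android.os.IBinder;
import android.os.Parcel;
import android.os.RemoteException;
import android.os.ServiceManager;
public class RawTransactExample {
    public static void main(String args[]) throws RemoteException {
        // The same IBinder that IHelloService.Stub.asInterface() would wrap in a Proxy.
        IBinder binder = ServiceManager.getService("hello");
        if (binder == null) {
            System.out.println("can not get hello service");
            return;
        }
        Parcel data = Parcel.obtain();
        Parcel reply = Parcel.obtain();
        try {
            // Exactly what Proxy.sayhello_to() writes before calling mRemote.transact().
            data.writeInterfaceToken("IHelloService");
            data.writeString("world");
            // TRANSACTION_sayhello_to = FIRST_CALL_TRANSACTION + 1 in the generated Stub.
            binder.transact(IBinder.FIRST_CALL_TRANSACTION + 1, data, reply, 0);
            reply.readException();
            int cnt = reply.readInt();
            System.out.println("sayhello_to returned cnt = " + cnt);
        } finally {
            reply.recycle();
            data.recycle();
        }
    }
}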
1.4 Source analysis: how mRemote in ServiceManagerProxy is constructed
- When ServiceManager.addService adds a service, the argument passed into the ServiceManagerProxy constructor is what becomes mRemote
- That argument comes from BinderInternal.getContextObject(), a native method implemented in C++ and reached through JNI. The C++ function ultimately produces a Java BinderProxy object whose mObject points to new BpBinder(0); the mHandle of 0 in BpBinder identifies the destination process, i.e. the servicemanager process.
Therefore mRemote is a Java BinderProxy object created from C++ code, whose mObject points to new BpBinder(0).
public class TestServer {
private static final String TAG = "TestServer";
public static void main(String args[])
{
/* add Service */
Slog.i(TAG, "add hello service");
ServiceManager.addService("hello", new HelloService());
while (true)
{
try {
Thread.sleep(100);
} catch (Exception e){}
}
}
}
public static void addService(String name, IBinder service) {
try {
getIServiceManager().addService(name, service, false);
} catch (RemoteException e) {
Log.e(TAG, "error in addService", e);
}
}
private static IServiceManager getIServiceManager() {
if (sServiceManager != null) {
return sServiceManager;
}
// Find the service manager
//BinderInternal.getContextObject() is what becomes mRemote
sServiceManager = ServiceManagerNative.asInterface(BinderInternal.getContextObject());
return sServiceManager;
}
static public IServiceManager asInterface(IBinder obj)
{
if (obj == null) {
return null;
}
IServiceManager in =
(IServiceManager)obj.queryLocalInterface(descriptor);
if (in != null) {
return in;
}
return new ServiceManagerProxy(obj);
}
public ServiceManagerProxy(IBinder remote) {
mRemote = remote;
}
/**
* Return the global "context object" of the system. This is usually
* an implementation of IServiceManager, which you can use to find
* other services.
*/
// A C/C++ function invoked through JNI. To find its implementation, search in Source Insight (Ctrl + /) for where this JNI method is registered.
//This function ultimately yields a Java BinderProxy object whose mObject points to new BpBinder(0)
public static final native IBinder getContextObject();
//android_util_Binder.cpp
static const JNINativeMethod gBinderInternalMethods[] = {
/* name, signature, funcPtr */
{ "getContextObject", "()Landroid/os/IBinder;", (void*)android_os_BinderInternal_getContextObject },
{ "joinThreadPool", "()V", (void*)android_os_BinderInternal_joinThreadPool },
{ "disableBackgroundScheduling", "(Z)V", (void*)android_os_BinderInternal_disableBackgroundScheduling },
{ "setMaxThreads", "(I)V", (void*)android_os_BinderInternal_setMaxThreads },
{ "handleGc", "()V", (void*)android_os_BinderInternal_handleGc }
};
static jobject android_os_BinderInternal_getContextObject(JNIEnv* env, jobject clazz)
{
sp<IBinder> b = ProcessState::self()->getContextObject(NULL);
return javaObjectForIBinder(env, b); //b = new BpBinder(0)
}
//From here on we are back in the C++ implementation
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
return getStrongProxyForHandle(0);
}
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
sp<IBinder> result;
AutoMutex _l(mLock);
handle_entry* e = lookupHandleLocked(handle);
if (e != NULL) {
// We need to create a new BpBinder if there isn't currently one, OR we
// are unable to acquire a weak reference on this current one. See comment
// in getWeakProxyForHandle() for more info about this.
IBinder* b = e->binder;
if (b == NULL || !e->refs->attemptIncWeak(this)) {
if (handle == 0) {
// Special case for context manager...
// The context manager is the only object for which we create
// a BpBinder proxy without already holding a reference.
// Perform a dummy transaction to ensure the context manager
// is registered before we create the first local reference
// to it (which will occur when creating the BpBinder).
// If a local reference is created for the BpBinder when the
// context manager is not present, the driver will fail to
// provide a reference to the context manager, but the
// driver API does not return status.
//
// Note that this is not race-free if the context manager
// dies while this code runs.
//
// TODO: add a driver API to wait for context manager, or
// stop special casing handle 0 for context manager and add
// a driver API to get a handle to the context manager with
// proper reference counting.
Parcel data;
status_t status = IPCThreadState::self()->transact(
0, IBinder::PING_TRANSACTION, data, NULL, 0);
if (status == DEAD_OBJECT)
return NULL;
}
b = new BpBinder(handle); //mHandle = 0
e->binder = b;
if (b) e->refs = b->getWeakRefs();
result = b;
} else {
// This little bit of nastyness is to allow us to add a primary
// reference to the remote proxy when this team doesn't have one
// but another team is sending the handle to us.
result.force_set(b);
e->refs->decWeak(this);
}
}
return result;
}
jobject javaObjectForIBinder(JNIEnv* env, const sp<IBinder>& val)
{
if (val == NULL) return NULL;
if (val->checkSubclass(&gBinderOffsets)) {
// One of our own!
jobject object = static_cast<JavaBBinder*>(val.get())->object();
LOGDEATH("objectForBinder %p: it's our own %p!\n", val.get(), object);
return object;
}
// For the rest of the function we will hold this lock, to serialize
// looking/creation/destruction of Java proxies for native Binder proxies.
AutoMutex _l(mProxyLock);
// Someone else's... do we know about it?
jobject object = (jobject)val->findObject(&gBinderProxyOffsets);
if (object != NULL) {
jobject res = jniGetReferent(env, object);
if (res != NULL) {
ALOGV("objectForBinder %p: found existing %p!\n", val.get(), res);
return res;
}
LOGDEATH("Proxy object %p of IBinder %p no longer in working set!!!", object, val.get());
android_atomic_dec(&gNumProxyRefs);
val->detachObject(&gBinderProxyOffsets);
env->DeleteGlobalRef(object);
}
//Create the Java BinderProxy object from C++ code
object = env->NewObject(gBinderProxyOffsets.mClass, gBinderProxyOffsets.mConstructor);
if (object != NULL) {
LOGDEATH("objectForBinder %p: created new proxy %p !\n", val.get(), object);
// The proxy holds a reference to the native object.
//Set the BinderProxy's mObject field to val.get(), i.e. new BpBinder(0)
env->SetLongField(object, gBinderProxyOffsets.mObject, (jlong)val.get());
val->incStrong((void*)javaObjectForIBinder);
// The native object needs to hold a weak reference back to the
// proxy, so we can retrieve the same proxy if it is still active.
jobject refObject = env->NewGlobalRef(
env->GetObjectField(object, gBinderProxyOffsets.mSelf));
val->attachObject(&gBinderProxyOffsets, refObject,
jnienv_to_javavm(env), proxy_cleanup);
// Also remember the death recipients registered on this proxy
sp<DeathRecipientList> drl = new DeathRecipientList;
drl->incStrong((void*)javaObjectForIBinder);
env->SetLongField(object, gBinderProxyOffsets.mOrgue, reinterpret_cast<jlong>(drl.get()));
// Note that a new object reference has been created.
android_atomic_inc(&gNumProxyRefs);
incRefsCreated(env);
}
return object;
}
const char* const kBinderProxyPathName = "android/os/BinderProxy";
static int int_register_android_os_BinderProxy(JNIEnv* env)
{
jclass clazz = FindClassOrDie(env, "java/lang/Error");
gErrorOffsets.mClass = MakeGlobalRefOrDie(env, clazz);
clazz = FindClassOrDie(env, kBinderProxyPathName);
gBinderProxyOffsets.mClass = MakeGlobalRefOrDie(env, clazz);
//The object being constructed is an instance of "android/os/BinderProxy"
gBinderProxyOffsets.mConstructor = GetMethodIDOrDie(env, clazz, "<init>", "()V");
......
}
1.5 Source analysis: how mRemote in IHelloService.Stub.Proxy is constructed
- To use the service, the client first calls IHelloService.Stub.asInterface(binder) to obtain an IHelloService.Stub.Proxy object
- The binder argument passed to the Proxy constructor is mRemote, i.e. mRemote is the IBinder returned by ServiceManager.getService("hello")
- ServiceManager.getService ultimately uses JNI to call the C++ nativeReadStrongBinder function, which creates a Java BinderProxy object
- The BinderProxy's mObject is set from parcel->readStrongBinder()
- And what parcel->readStrongBinder() yields is built from the server's handle value.
Therefore mRemote is a Java BinderProxy object created from C++ code, whose mObject points to new BpBinder(handle), where handle is the value for the hello service returned by service_manager.
public class TestClient {
private static final String TAG = "TestClient";
public static void main(String args[])
{
if (args.length == 0)
{
System.out.println("Usage: need parameter: <hello|goodbye> [name]");
return;
}
if (args[0].equals("hello"))
{
/* 1. getService */
IBinder binder = ServiceManager.getService("hello");
if (binder == null)
{
System.out.println("can not get hello service");
Slog.i(TAG, "can not get hello service");
return;
}
IHelloService svr = IHelloService.Stub.asInterface(binder);
if (args.length == 1)
{
try {
svr.sayhello();
System.out.println("call sayhello");
Slog.i(TAG, "call sayhello");
} catch (Exception e) {}
}
else
{
try {
int cnt = svr.sayhello_to(args[1]);
System.out.println("call sayhello_to "+args[1]+" : cnt = "+cnt);
Slog.i(TAG, "call sayhello_to "+args[1]+" : cnt = "+cnt);
} catch (Exception e) {
System.out.println("call sayhello_to , err :"+e);
Slog.i(TAG, "call sayhello_to , err : "+e);
}
}
}
}
}
/**
* Cast an IBinder object into an IHelloService interface,
* generating a proxy if needed.
*/
public static IHelloService asInterface(android.os.IBinder obj)
{
if ((obj==null)) {
return null;
}
android.os.IInterface iin = obj.queryLocalInterface(DESCRIPTOR);
if (((iin!=null)&&(iin instanceof IHelloService))) {
return ((IHelloService)iin);
}
return new IHelloService.Stub.Proxy(obj);
}
private static class Proxy implements IHelloService
{
private android.os.IBinder mRemote;
Proxy(android.os.IBinder remote)
{
//mRemote is the IBinder obtained from getService("hello")
mRemote = remote;
}
......
}
// As shown in 1.2 above, IBinder binder = ServiceManager.getService("hello");
// is implemented by the getService method of the ServiceManagerProxy object
// Source: https://github.com/CyanogenMod/android_frameworks_base/blob/cm-14.1/core/java/android/os/ServiceManagerNative.java#L109C7-L109C26
public IBinder getService(String name) throws RemoteException {
Parcel data = Parcel.obtain();
Parcel reply = Parcel.obtain();
data.writeInterfaceToken(IServiceManager.descriptor);
data.writeString(name);
mRemote.transact(GET_SERVICE_TRANSACTION, data, reply, 0);
IBinder binder = reply.readStrongBinder();//this is the mRemote held by IHelloService.Stub.Proxy
reply.recycle();
data.recycle();
return binder;
}
/**
* Read an object from the parcel at the current dataPosition().
*/
public final IBinder readStrongBinder() {
return nativeReadStrongBinder(mNativePtr);
}
private static native IBinder nativeReadStrongBinder(long nativePtr);
//android_os_Parcel.cpp
static const JNINativeMethod gParcelMethods[] = {
......
{"nativeReadStrongBinder", "(J)Landroid/os/IBinder;", (void*)android_os_Parcel_readStrongBinder}
......
}
static jobject android_os_Parcel_readStrongBinder(JNIEnv* env, jclass clazz, jlong nativePtr)
{
//Convert the Java Parcel object into a C++ Parcel object
/*The client sent a getService request to service_manager and received a reply;
the reply contains a flat_binder_object, and here that reply is handled as a C++ Parcel object
*/
Parcel* parcel = reinterpret_cast<Parcel*>(nativePtr);
if (parcel != NULL) {
//javaObjectForIBinder was already analyzed in 1.4 above.
//It creates a Java BinderProxy object whose mObject = parcel->readStrongBinder()
return javaObjectForIBinder(env, parcel->readStrongBinder());
}
return NULL;
}
sp<IBinder> Parcel::readStrongBinder() const
{
sp<IBinder> val;
readStrongBinder(&val);
return val;
}
status_t Parcel::readStrongBinder(sp<IBinder>* val) const
{
//This function was also analyzed in the C++ Binder article
return unflatten_binder(ProcessState::self(), *this, val);
}
status_t unflatten_binder(const sp<ProcessState>& proc,
const Parcel& in, sp<IBinder>* out)
{
const flat_binder_object* flat = in.readObject(false);
if (flat) {
switch (flat->type) {
case BINDER_TYPE_BINDER:
*out = reinterpret_cast<IBinder*>(flat->cookie);
return finish_unflatten_binder(NULL, *flat, in);
case BINDER_TYPE_HANDLE:
// Create a BpBinder object from the handle
*out = proc->getStrongProxyForHandle(flat->handle);
return finish_unflatten_binder(
static_cast<BpBinder*>(out->get()), *flat, in);
}
}
return BAD_TYPE;
}
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
sp<IBinder> result;
AutoMutex _l(mLock);
handle_entry* e = lookupHandleLocked(handle);
if (e != NULL) {
// We need to create a new BpBinder if there isn't currently one, OR we
// are unable to acquire a weak reference on this current one. See comment
// in getWeakProxyForHandle() for more info about this.
IBinder* b = e->binder;
if (b == NULL || !e->refs->attemptIncWeak(this)) {
if (handle == 0) {
// Special case for context manager...
// The context manager is the only object for which we create
// a BpBinder proxy without already holding a reference.
// Perform a dummy transaction to ensure the context manager
// is registered before we create the first local reference
// to it (which will occur when creating the BpBinder).
// If a local reference is created for the BpBinder when the
// context manager is not present, the driver will fail to
// provide a reference to the context manager, but the
// driver API does not return status.
//
// Note that this is not race-free if the context manager
// dies while this code runs.
//
// TODO: add a driver API to wait for context manager, or
// stop special casing handle 0 for context manager and add
// a driver API to get a handle to the context manager with
// proper reference counting.
Parcel data;
status_t status = IPCThreadState::self()->transact(
0, IBinder::PING_TRANSACTION, data, NULL, 0);
if (status == DEAD_OBJECT)
return NULL;
}
// Create the BpBinder object
b = new BpBinder(handle);
e->binder = b;
if (b) e->refs = b->getWeakRefs();
result = b;
} else {
// This little bit of nastyness is to allow us to add a primary
// reference to the remote proxy when this team doesn't have one
// but another team is sending the handle to us.
result.force_set(b);
e->refs->decWeak(this);
}
}
return result;
}
1.6 Knowing that mRemote is a Java BinderProxy object, let's look at mRemote.transact
It ends up calling the transact function of the C++ BpBinder object, and from there we are back in the C++ Binder implementation. BpBinder's transact function was explained in detail in Writing a C++ Android Binder Service by Hand, with Source Code Analysis, so it is not repeated here.
//Binder.java
public boolean transact(int code, Parcel data, Parcel reply, int flags) throws RemoteException {
Binder.checkParcel(this, code, data, "Unreasonably large binder buffer");
if (Binder.isTracingEnabled()) { Binder.getTransactionTracker().addTrace(); }
return transactNative(code, data, reply, flags);
}
public native boolean transactNative(int code, Parcel data, Parcel reply,
int flags) throws RemoteException;
//android_util_Binder.cpp
static const JNINativeMethod gBinderProxyMethods[] = {
/* name, signature, funcPtr */
{"pingBinder", "()Z", (void*)android_os_BinderProxy_pingBinder},
{"isBinderAlive", "()Z", (void*)android_os_BinderProxy_isBinderAlive},
{"getInterfaceDescriptor", "()Ljava/lang/String;", (void*)android_os_BinderProxy_getInterfaceDescriptor},
{"transactNative", "(ILandroid/os/Parcel;Landroid/os/Parcel;I)Z", (void*)android_os_BinderProxy_transact},
{"linkToDeath", "(Landroid/os/IBinder$DeathRecipient;I)V", (void*)android_os_BinderProxy_linkToDeath},
{"unlinkToDeath", "(Landroid/os/IBinder$DeathRecipient;I)Z", (void*)android_os_BinderProxy_unlinkToDeath},
{"destroy", "()V", (void*)android_os_BinderProxy_destroy},
};
static jboolean android_os_BinderProxy_transact(JNIEnv* env, jobject obj,
jint code, jobject dataObj, jobject replyObj, jint flags) // throws RemoteException
{
if (dataObj == NULL) {
jniThrowNullPointerException(env, NULL);
return JNI_FALSE;
}
Parcel* data = parcelForJavaObject(env, dataObj);
if (data == NULL) {
return JNI_FALSE;
}
Parcel* reply = parcelForJavaObject(env, replyObj);
if (reply == NULL && replyObj != NULL) {
return JNI_FALSE;
}
//Fetch mObject from the Java BinderProxy object; it is a BpBinder object
IBinder* target = (IBinder*)
env->GetLongField(obj, gBinderProxyOffsets.mObject);
if (target == NULL) {
jniThrowException(env, "java/lang/IllegalStateException", "Binder has been finalized!");
return JNI_FALSE;
}
ALOGV("Java code calling transact on %p in Java object %p with code %" PRId32 "\n",
target, obj, code);
bool time_binder_calls;
int64_t start_millis;
if (kEnableBinderSample) {
// Only log the binder call duration for things on the Java-level main thread.
// But if we don't
time_binder_calls = should_time_binder_calls();
if (time_binder_calls) {
start_millis = uptimeMillis();
}
}
//printf("Transact from Java code to %p sending: ", target); data->print();
//From here we are into the C++ Binder implementation
status_t err = target->transact(code, *data, reply, flags);
//if (reply) printf("Transact from Java code to %p received: ", target); reply->print();
if (kEnableBinderSample) {
if (time_binder_calls) {
conditionally_log_binder_call(start_millis, target, code);
}
}
if (err == NO_ERROR) {
return JNI_TRUE;
} else if (err == UNKNOWN_TRANSACTION) {
return JNI_FALSE;
}
signalExceptionForError(env, obj, err, true /*canThrowRemoteException*/, data->dataSize());
return JNI_FALSE;
}
2. How the server reads data and executes the service
public class TestServer {
private static final String TAG = "TestServer";
public static void main(String args[])
{
/* add Service */
Slog.i(TAG, "add hello service");
ServiceManager.addService("hello", new HelloService());
//The main thread only needs to stay alive.
//Reading, parsing, and dispatching requests is handled by the two Binder child threads created when app_process starts the program
while (true)
{
try {
Thread.sleep(100);
} catch (Exception e){}
}
}
}
2.1 app_process creates Binder threads for the service, which do the reading
app_process source: app_process/app_main.cpp
- After app_main launches the TestServer service, it eventually causes the onStarted function of the AppRuntime class in app_main.cpp to be called
- onStarted creates a child thread via ProcessState::startThreadPool
- Eventually, threadLoop in PoolThread calls IPCThreadState::self()->joinThreadPool(mIsMain) to read and handle client data
- IPCThreadState::self()->joinThreadPool was explained in detail in Writing a C++ Android Binder Service by Hand, with Source Code Analysis, so it is not repeated here.
int main(int argc, char* const argv[])
{
if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) < 0) {
// Older kernels don't understand PR_SET_NO_NEW_PRIVS and return
// EINVAL. Don't die on such kernels.
if (errno != EINVAL) {
LOG_ALWAYS_FATAL("PR_SET_NO_NEW_PRIVS failed: %s", strerror(errno));
return 12;
}
}
AppRuntime runtime(argv[0], computeArgBlockSize(argc, argv));
// Process command line arguments
// ignore argv[0]
argc--;
argv++;
// Everything up to '--' or first non '-' arg goes to the vm.
//
// The first argument after the VM args is the "parent dir", which
// is currently unused.
//
// After the parent dir, we expect one or more the following internal
// arguments :
//
// --zygote : Start in zygote mode
// --start-system-server : Start the system server.
// --application : Start in application (stand alone, non zygote) mode.
// --nice-name : The nice name for this process.
//
// For non zygote starts, these arguments will be followed by
// the main class name. All remaining arguments are passed to
// the main method of this class.
//
// For zygote starts, all remaining arguments are passed to the zygote.
// main function.
//
// Note that we must copy argument string values since we will rewrite the
// entire argument block when we apply the nice name to argv0.
int i;
for (i = 0; i < argc; i++) {
if (argv[i][0] != '-') {
break;
}
if (argv[i][1] == '-' && argv[i][2] == 0) {
++i; // Skip --.
break;
}
runtime.addOption(strdup(argv[i]));
}
// Parse runtime arguments. Stop at first unrecognized option.
bool zygote = false;
bool startSystemServer = false;
bool application = false;
String8 niceName;
String8 className;
++i; // Skip unused "parent dir" argument.
while (i < argc) {
const char* arg = argv[i++];
if (strcmp(arg, "--zygote") == 0) {
zygote = true;
niceName = ZYGOTE_NICE_NAME;
} else if (strcmp(arg, "--start-system-server") == 0) {
startSystemServer = true;
} else if (strcmp(arg, "--application") == 0) {
application = true;
} else if (strncmp(arg, "--nice-name=", 12) == 0) {
niceName.setTo(arg + 12);
} else if (strncmp(arg, "--", 2) != 0) {
className.setTo(arg);
break;
} else {
--i;
break;
}
}
Vector<String8> args;
if (!className.isEmpty()) {
// We're not in zygote mode, the only argument we need to pass
// to RuntimeInit is the application argument.
//
// The Remainder of args get passed to startup class main(). Make
// copies of them before we overwrite them with the process name.
args.add(application ? String8("application") : String8("tool"));
runtime.setClassNameAndArgs(className, argc - i, argv + i);
} else {
// We're in zygote mode.
maybeCreateDalvikCache();
if (startSystemServer) {
args.add(String8("start-system-server"));
}
char prop[PROP_VALUE_MAX];
if (property_get(ABI_LIST_PROPERTY, prop, NULL) == 0) {
LOG_ALWAYS_FATAL("app_process: Unable to determine ABI list from property %s.",
ABI_LIST_PROPERTY);
return 11;
}
String8 abiFlag("--abi-list=");
abiFlag.append(prop);
args.add(abiFlag);
// In zygote mode, pass all remaining arguments to the zygote
// main() method.
for (; i < argc; ++i) {
args.add(String8(argv[i]));
}
}
if (!niceName.isEmpty()) {
runtime.setArgv0(niceName.string());
set_process_name(niceName.string());
}
if (zygote) {
runtime.start("com.android.internal.os.ZygoteInit", args, zygote);
} else if (className) {
runtime.start("com.android.internal.os.RuntimeInit", args, zygote);
} else {
fprintf(stderr, "Error: no class name or --zygote supplied.\n");
app_usage();
LOG_ALWAYS_FATAL("app_process: no class name or --zygote supplied.");
return 10;
}
}
//We won't go into the details of app_main here.
//What matters is that after app_main launches the TestServer service, the onStarted function of the AppRuntime class is eventually called
virtual void onStarted()
{
sp<ProcessState> proc = ProcessState::self();
ALOGV("App process: starting thread pool.\n");
//Create the child thread
proc->startThreadPool();
AndroidRuntime* ar = AndroidRuntime::getRuntime();
//Call TestServer's main function to start the TestServer program
ar->callMain(mClassName, mClass, mArgs);
IPCThreadState::self()->stopProcess();
}
void ProcessState::startThreadPool()
{
AutoMutex _l(mLock);
if (!mThreadPoolStarted) {
mThreadPoolStarted = true;
spawnPooledThread(true);
}
}
void ProcessState::spawnPooledThread(bool isMain)
{
if (mThreadPoolStarted) {
String8 name = makeBinderThreadName();
ALOGV("Spawning new pooled thread, name=%s\n", name.string());
// Create the child thread
sp<Thread> t = new PoolThread(isMain);
// Run the child thread
t->run(name.string());
}
}
status_t Thread::run(const char* name, int32_t priority, size_t stack)
{
if (name == nullptr) {
ALOGW("Thread name not provided to Thread::run");
name = 0;
}
Mutex::Autolock _l(mLock);
if (mRunning) {
// thread already started
return INVALID_OPERATION;
}
// reset status and exitPending to their default value, so we can
// try again after an error happened (either below, or in readyToRun())
mStatus = NO_ERROR;
mExitPending = false;
mThread = thread_id_t(-1);
// hold a strong reference on ourself
mHoldSelf = this;
mRunning = true;
bool res;
if (mCanCallJava) {
res = createThreadEtc(_threadLoop,
this, name, priority, stack, &mThread);
} else {
//Execute the threadLoop function of PoolThread
res = androidCreateRawThreadEtc(_threadLoop,
this, name, priority, stack, &mThread);
}
if (res == false) {
mStatus = UNKNOWN_ERROR; // something happened!
mRunning = false;
mThread = thread_id_t(-1);
mHoldSelf.clear(); // "this" may have gone away after this.
return UNKNOWN_ERROR;
}
// Do not refer to mStatus here: The thread is already running (may, in fact
// already have exited with a valid mStatus result). The NO_ERROR indication
// here merely indicates successfully starting the thread and does not
// imply successful termination/execution.
return NO_ERROR;
// Exiting scope of mLock is a memory barrier and allows new thread to run
}
int Thread::_threadLoop(void* user)
{
Thread* const self = static_cast<Thread*>(user);
sp<Thread> strong(self->mHoldSelf);
wp<Thread> weak(strong);
self->mHoldSelf.clear();
#if defined(__ANDROID__)
// this is very useful for debugging with gdb
self->mTid = gettid();
#endif
bool first = true;
do {
bool result;
if (first) {
first = false;
self->mStatus = self->readyToRun();
result = (self->mStatus == NO_ERROR);
if (result && !self->exitPending()) {
// Binder threads (and maybe others) rely on threadLoop
// running at least once after a successful ::readyToRun()
// (unless, of course, the thread has already been asked to exit
// at that point).
// This is because threads are essentially used like this:
// (new ThreadSubclass())->run();
// The caller therefore does not retain a strong reference to
// the thread and the thread would simply disappear after the
// successful ::readyToRun() call instead of entering the
// threadLoop at least once.
result = self->threadLoop();
}
} else {
result = self->threadLoop();
}
// establish a scope for mLock
{
Mutex::Autolock _l(self->mLock);
if (result == false || self->mExitPending) {
self->mExitPending = true;
self->mRunning = false;
// clear thread ID so that requestExitAndWait() does not exit if
// called by a new thread using the same thread ID as this one.
self->mThread = thread_id_t(-1);
// note that interested observers blocked in requestExitAndWait are
// awoken by broadcast, but blocked on mLock until break exits scope
self->mThreadExitedCondition.broadcast();
break;
}
}
// Release our strong reference, to let a chance to the thread
// to die a peaceful death.
strong.clear();
// And immediately, re-acquire a strong reference for the next loop
strong = weak.promote();
} while(strong != 0);
return 0;
}
class PoolThread : public Thread
{
public:
PoolThread(bool isMain)
: mIsMain(isMain)
{
}
protected:
virtual bool threadLoop()
{
//From here we connect back to the C++ Binder implementation
IPCThreadState::self()->joinThreadPool(mIsMain);
return false;
}
const bool mIsMain;
};
2.3 Reading data in the loop inside IPCThreadState's joinThreadPool on one of those threads
- The data read back contains .ptr/.cookie
- The cookie is cast to a BBinder object
- The transact function of JavaBBinder, a BBinder subclass, is called
- C++ calls the execTransact function in Binder.java
- which calls the onTransact function in IHelloService.java
- which calls the server's local function
void IPCThreadState::joinThreadPool(bool isMain)
{
LOG_THREADPOOL("**** THREAD %p (PID %d) IS JOINING THE THREAD POOL\n", (void*)pthread_self(), getpid());
mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);
// This thread may have been spawned by a thread that was in the background
// scheduling group, so first we will make sure it is in the foreground
// one to avoid performing an initial transaction in the background.
set_sched_policy(mMyThreadId, SP_FOREGROUND);
status_t result;
do {
processPendingDerefs();
// now get the next command to be processed, waiting if necessary
result = getAndExecuteCommand();
if (result < NO_ERROR && result != TIMED_OUT && result != -ECONNREFUSED && result != -EBADF) {
ALOGE("getAndExecuteCommand(fd=%d) returned unexpected error %d, aborting",
mProcess->mDriverFD, result);
abort();
}
// Let this thread exit the thread pool if it is no longer
// needed and it is not the main process thread.
if(result == TIMED_OUT && !isMain) {
break;
}
} while (result != -ECONNREFUSED && result != -EBADF);
LOG_THREADPOOL("**** THREAD %p (PID %d) IS LEAVING THE THREAD POOL err=%p\n",
(void*)pthread_self(), getpid(), (void*)result);
mOut.writeInt32(BC_EXIT_LOOPER);
talkWithDriver(false);
}
status_t IPCThreadState::getAndExecuteCommand()
{
status_t result;
int32_t cmd;
//Read the data
result = talkWithDriver();
if (result >= NO_ERROR) {
size_t IN = mIn.dataAvail();
if (IN < sizeof(int32_t)) return result;
cmd = mIn.readInt32();
IF_LOG_COMMANDS() {
alog << "Processing top-level Command: "
<< getReturnString(cmd) << endl;
}
pthread_mutex_lock(&mProcess->mThreadCountLock);
mProcess->mExecutingThreadsCount++;
if (mProcess->mExecutingThreadsCount >= mProcess->mMaxThreads &&
mProcess->mStarvationStartTimeMs == 0) {
mProcess->mStarvationStartTimeMs = uptimeMillis();
}
pthread_mutex_unlock(&mProcess->mThreadCountLock);
//Parse the data
result = executeCommand(cmd);
pthread_mutex_lock(&mProcess->mThreadCountLock);
mProcess->mExecutingThreadsCount--;
if (mProcess->mExecutingThreadsCount < mProcess->mMaxThreads &&
mProcess->mStarvationStartTimeMs != 0) {
int64_t starvationTimeMs = uptimeMillis() - mProcess->mStarvationStartTimeMs;
if (starvationTimeMs > 100) {
ALOGE("binder thread pool (%zu threads) starved for %" PRId64 " ms",
mProcess->mMaxThreads, starvationTimeMs);
}
mProcess->mStarvationStartTimeMs = 0;
}
pthread_cond_broadcast(&mProcess->mThreadCountDecrement);
pthread_mutex_unlock(&mProcess->mThreadCountLock);
// After executing the command, ensure that the thread is returned to the
// foreground cgroup before rejoining the pool. The driver takes care of
// restoring the priority, but doesn't do anything with cgroups so we
// need to take care of that here in userspace. Note that we do make
// sure to go in the foreground after executing a transaction, but
// there are other callbacks into user code that could have changed
// our group so we want to make absolutely sure it is put back.
set_sched_policy(mMyThreadId, SP_FOREGROUND);
}
return result;
}
status_t IPCThreadState::executeCommand(int32_t cmd)
{
BBinder* obj;
RefBase::weakref_type* refs;
status_t result = NO_ERROR;
switch ((uint32_t)cmd) {
......
// Handle the data
case BR_TRANSACTION:
{
binder_transaction_data tr;
result = mIn.read(&tr, sizeof(tr));
ALOG_ASSERT(result == NO_ERROR,
"Not enough command data for brTRANSACTION");
if (result != NO_ERROR) break;
Parcel buffer;
buffer.ipcSetDataReference(
reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
tr.data_size,
reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
tr.offsets_size/sizeof(binder_size_t), freeBuffer, this);
const pid_t origPid = mCallingPid;
const uid_t origUid = mCallingUid;
const int32_t origStrictModePolicy = mStrictModePolicy;
const int32_t origTransactionBinderFlags = mLastTransactionBinderFlags;
mCallingPid = tr.sender_pid;
mCallingUid = tr.sender_euid;
mLastTransactionBinderFlags = tr.flags;
int curPrio = getpriority(PRIO_PROCESS, mMyThreadId);
if (gDisableBackgroundScheduling) {
if (curPrio > ANDROID_PRIORITY_NORMAL) {
// We have inherited a reduced priority from the caller, but do not
// want to run in that state in this process. The driver set our
// priority already (though not our scheduling class), so bounce
// it back to the default before invoking the transaction.
setpriority(PRIO_PROCESS, mMyThreadId, ANDROID_PRIORITY_NORMAL);
}
} else {
if (curPrio >= ANDROID_PRIORITY_BACKGROUND) {
// We want to use the inherited priority from the caller.
// Ensure this thread is in the background scheduling class,
// since the driver won't modify scheduling classes for us.
// The scheduling group is reset to default by the caller
// once this method returns after the transaction is complete.
set_sched_policy(mMyThreadId, SP_BACKGROUND);
}
}
//ALOGI(">>>> TRANSACT from pid %d uid %d\n", mCallingPid, mCallingUid);
Parcel reply;
status_t error;
IF_LOG_TRANSACTIONS() {
TextOutput::Bundle _b(alog);
alog << "BR_TRANSACTION thr " << (void*)pthread_self()
<< " / obj " << tr.target.ptr << " / code "
<< TypeCode(tr.code) << ": " << indent << buffer
<< dedent << endl
<< "Data addr = "
<< reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer)
<< ", offsets addr="
<< reinterpret_cast<const size_t*>(tr.data.ptr.offsets) << endl;
}
if (tr.target.ptr) {
// We only have a weak reference on the target object, so we must first try to
// safely acquire a strong reference before doing anything else with it.
if (reinterpret_cast<RefBase::weakref_type*>(
tr.target.ptr)->attemptIncStrong(this)) {
//Cast the cookie that was read into a BBinder; this BBinder is the subclass JavaBBinder,
//then call the BBinder's transact function
error = reinterpret_cast<BBinder*>(tr.cookie)->transact(tr.code, buffer,
&reply, tr.flags);
reinterpret_cast<BBinder*>(tr.cookie)->decStrong(this);
} else {
error = UNKNOWN_TRANSACTION;
}
} else {
error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
}
//ALOGI("<<<< TRANSACT from pid %d restore pid %d uid %d\n",
// mCallingPid, origPid, origUid);
if ((tr.flags & TF_ONE_WAY) == 0) {
LOG_ONEWAY("Sending reply to %d!", mCallingPid);
if (error < NO_ERROR) reply.setError(error);
sendReply(reply, 0);
} else {
LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
}
mCallingPid = origPid;
mCallingUid = origUid;
mStrictModePolicy = origStrictModePolicy;
mLastTransactionBinderFlags = origTransactionBinderFlags;
IF_LOG_TRANSACTIONS() {
TextOutput::Bundle _b(alog);
alog << "BC_REPLY thr " << (void*)pthread_self() << " / obj "
<< tr.target.ptr << ": " << indent << reply << dedent << endl;
}
}
break;
.......
default:
printf("*** BAD COMMAND %d received from Binder driver\n", cmd);
result = UNKNOWN_ERROR;
break;
}
if (result != NO_ERROR) {
mLastError = result;
}
return result;
}
status_t BBinder::transact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
data.setDataPosition(0);
status_t err = NO_ERROR;
switch (code) {
case PING_TRANSACTION:
reply->writeInt32(pingBinder());
break;
default:
err = onTransact(code, data, reply, flags);
break;
}
if (reply != NULL) {
reply->setDataPosition(0);
}
return err;
}
//class JavaBBinder
class JavaBBinder : public BBinder
{
public:
JavaBBinder(JNIEnv* env, jobject object)
: mVM(jnienv_to_javavm(env)), mObject(env->NewGlobalRef(object))
{
ALOGV("Creating JavaBBinder %p\n", this);
android_atomic_inc(&gNumLocalRefs);
incRefsCreated(env);
}
bool checkSubclass(const void* subclassID) const
{
return subclassID == &gBinderOffsets;
}
jobject object() const
{
return mObject;
}
protected:
virtual ~JavaBBinder()
{
ALOGV("Destroying JavaBBinder %p\n", this);
android_atomic_dec(&gNumLocalRefs);
JNIEnv* env = javavm_to_jnienv(mVM);
env->DeleteGlobalRef(mObject);
}
virtual status_t onTransact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags = 0)
{
JNIEnv* env = javavm_to_jnienv(mVM);
ALOGV("onTransact() on %p calling object %p in env %p vm %p\n", this, mObject, env, mVM);
IPCThreadState* thread_state = IPCThreadState::self();
const int32_t strict_policy_before = thread_state->getStrictModePolicy();
//printf("Transact from %p to Java code sending: ", this);
//data.print();
//printf("\n");
//mObject points to the Java HelloService object
jboolean res = env->CallBooleanMethod(mObject, gBinderOffsets.mExecTransact,
code, reinterpret_cast<jlong>(&data), reinterpret_cast<jlong>(reply), flags);
if (env->ExceptionCheck()) {
jthrowable excep = env->ExceptionOccurred();
report_exception(env, excep,
"*** Uncaught remote exception! "
"(Exceptions are not yet supported across processes.)");
res = JNI_FALSE;
/* clean up JNI local ref -- we don't return to Java code */
env->DeleteLocalRef(excep);
}
// Check if the strict mode state changed while processing the
// call. The Binder state will be restored by the underlying
// Binder system in IPCThreadState, however we need to take care
// of the parallel Java state as well.
if (thread_state->getStrictModePolicy() != strict_policy_before) {
set_dalvik_blockguard_policy(env, strict_policy_before);
}
if (env->ExceptionCheck()) {
jthrowable excep = env->ExceptionOccurred();
report_exception(env, excep,
"*** Uncaught exception in onBinderStrictModePolicyChange");
/* clean up JNI local ref -- we don't return to Java code */
env->DeleteLocalRef(excep);
}
// Need to always call through the native implementation of
// SYSPROPS_TRANSACTION.
if (code == SYSPROPS_TRANSACTION) {
BBinder::onTransact(code, data, reply, flags);
}
//aout << "onTransact to Java code; result=" << res << endl
// << "Transact from " << this << " to Java code returning "
// << reply << ": " << *reply << endl;
return res != JNI_FALSE ? NO_ERROR : UNKNOWN_TRANSACTION;
}
virtual status_t dump(int fd, const Vector<String16>& args)
{
return 0;
}
private:
JavaVM* const mVM;
jobject const mObject;
};
//gBinderOffsets.mExecTransact points to the execTransact method of the android/os/Binder Java class
//and here the actual Binder object is an instance of its subclass, the HelloService object
gBinderOffsets.mExecTransact = GetMethodIDOrDie(env, clazz, "execTransact", "(IJJI)Z");
const char* const kBinderPathName = "android/os/Binder";
jclass clazz = FindClassOrDie(env, kBinderPathName);
//Binder.java
private boolean execTransact(int code, long dataObj, long replyObj,
int flags) {
Parcel data = Parcel.obtain(dataObj);
Parcel reply = Parcel.obtain(replyObj);
// theoretically, we should call transact, which will call onTransact,
// but all that does is rewind it, and we just got these from an IPC,
// so we'll just call it directly.
boolean res;
// Log any exceptions as warnings, don't silently suppress them.
// If the call was FLAG_ONEWAY then these exceptions disappear into the ether.
try {
//This onTransact is the method implemented for the HelloService object (by its parent class IHelloService.Stub)
res = onTransact(code, data, reply, flags);
} catch (RemoteException|RuntimeException e) {
if (LOG_RUNTIME_EXCEPTION) {
Log.w(TAG, "Caught a RuntimeException from the binder stub implementation.", e);
}
if ((flags & FLAG_ONEWAY) != 0) {
if (e instanceof RemoteException) {
Log.w(TAG, "Binder call failed.", e);
} else {
Log.w(TAG, "Caught a RuntimeException from the binder stub implementation.", e);
}
} else {
reply.setDataPosition(0);
reply.writeException(e);
}
res = true;
} catch (OutOfMemoryError e) {
// Unconditionally log this, since this is generally unrecoverable.
Log.e(TAG, "Caught an OutOfMemoryError from the binder stub implementation.", e);
RuntimeException re = new RuntimeException("Out of memory", e);
reply.setDataPosition(0);
reply.writeException(re);
res = true;
}
checkParcel(this, code, reply, "Unreasonably large binder reply buffer");
reply.recycle();
data.recycle();
// Just in case -- we are done with the IPC, so there should be no more strict
// mode violations that have gathered for this thread. Either they have been
// parceled and are now in transport off to the caller, or we are returning back
// to the main transaction loop to wait for another incoming transaction. Either
// way, strict mode begone!
StrictMode.clearGatheredViolations();
return res;
}
}
//IHelloService.java
@Override public boolean onTransact(int code, android.os.Parcel data, android.os.Parcel reply, int flags) throws android.os.RemoteException
{
switch (code)
{
case INTERFACE_TRANSACTION:
{
reply.writeString(DESCRIPTOR);
return true;
}
case TRANSACTION_sayhello:
{
data.enforceInterface(DESCRIPTOR);
this.sayhello();
reply.writeNoException();
return true;
}
case TRANSACTION_sayhello_to:
{
data.enforceInterface(DESCRIPTOR);
java.lang.String _arg0;
_arg0 = data.readString();
int _result = this.sayhello_to(_arg0);
reply.writeNoException();
reply.writeInt(_result);
return true;
}
}
return super.onTransact(code, data, reply, flags);
}
public class HelloService extends IHelloService.Stub {
private static final String TAG = "HelloService";
private int cnt1 = 0;
private int cnt2 = 0;
public void sayhello() throws android.os.RemoteException {
cnt1++;
Slog.i(TAG, "sayhello : cnt = "+cnt1);
}
public int sayhello_to(java.lang.String name) throws android.os.RemoteException {
cnt2++;
Slog.i(TAG, "sayhello_to "+name+" : cnt = "+cnt2);
return cnt2;
}
}
2.4 When does the data in cookie get stored?
Hypothesis:
- new HelloService() is a Java object
- When the data is handled, .cookie is cast to a BBinder object, which is a C++ object
- Therefore addService must convert the Java object into a BBinder subclass object and store it in .cookie
Conclusion:
- addService calls a C++ function through JNI that creates a JavaBBinder object (a BBinder subclass) whose mObject points to the Java object new HelloService(), which carries the onTransact method; that JavaBBinder object is then stored in the cookie of the flat_binder_object (and ultimately in binder_node.cookie for this service inside the Binder driver)
- The server process reads the cookie from the driver, casts it to a BBinder object, and calls its transact function; transact eventually calls the onTransact function defined in the BBinder subclass JavaBBinder (implemented in C++)
- That in turn calls the onTransact function defined in IHelloService (implemented in Java), which parses the data and calls sayhello/sayhello_to
Verifying this in the code:
public class TestServer {
private static final String TAG = "TestServer";
public static void main(String args[])
{
/* add Service */
Slog.i(TAG, "add hello service");
ServiceManager.addService("hello", new HelloService());
//The main thread only needs to stay alive.
//Reading, parsing, and dispatching requests is handled by the two Binder child threads created when app_process starts the program
while (true)
{
try {
Thread.sleep(100);
} catch (Exception e){}
}
}
}
// As analyzed earlier, ServiceManager.addService eventually calls addService in the ServiceManagerProxy class
public void addService(String name, IBinder service, boolean allowIsolated)
throws RemoteException {
Parcel data = Parcel.obtain();
Parcel reply = Parcel.obtain();
data.writeInterfaceToken(IServiceManager.descriptor);
data.writeString(name);
data.writeStrongBinder(service);
data.writeInt(allowIsolated ? 1 : 0);
mRemote.transact(ADD_SERVICE_TRANSACTION, data, reply, 0);
reply.recycle();
data.recycle();
}
/**
* Write an object into the parcel at the current dataPosition(),
* growing dataCapacity() if needed.
*/
public final void writeStrongBinder(IBinder val) {
nativeWriteStrongBinder(mNativePtr, val); // val = service = new HelloService()
}
//android_os_Parcel.cpp
{"nativeWriteStrongBinder", "(JLandroid/os/IBinder;)V", (void*)android_os_Parcel_writeStrongBinder},
/*
This method constructs a JavaBBinder object (a C++ object) whose mObject is the new HelloService() object (a Java object),
and then sets .cookie = JavaBBinder
*/
static void android_os_Parcel_writeStrongBinder(JNIEnv* env, jclass clazz, jlong nativePtr, jobject object)
{
//Convert the Java Parcel object into a C++ Parcel object
Parcel* parcel = reinterpret_cast<Parcel*>(nativePtr);
if (parcel != NULL) {
//ibinderForJavaObject converts a Java object into an IBinder
const status_t err = parcel->writeStrongBinder(ibinderForJavaObject(env, object)); //object = new HelloService()
if (err != NO_ERROR) {
signalExceptionForError(env, clazz, err);
}
}
}
//Write the Java object, now converted to an IBinder, into the cookie
status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{
return flatten_binder(ProcessState::self(), val, this);
}
status_t flatten_binder(const sp<ProcessState>& /*proc*/,
const sp<IBinder>& binder, Parcel* out)
{
flat_binder_object obj;
obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
if (binder != NULL) {
IBinder *local = binder->localBinder();
if (!local) {
BpBinder *proxy = binder->remoteBinder();
if (proxy == NULL) {
ALOGE("null proxy");
}
const int32_t handle = proxy ? proxy->handle() : 0;
obj.type = BINDER_TYPE_HANDLE;
obj.binder = 0; /* Don't pass uninitialized stack data to a remote process */
obj.handle = handle;
obj.cookie = 0;
} else {
obj.type = BINDER_TYPE_BINDER;
obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs());
obj.cookie = reinterpret_cast<uintptr_t>(local);
}
} else {
obj.type = BINDER_TYPE_BINDER;
obj.binder = 0;
obj.cookie = 0;
}
return finish_flatten_binder(binder, obj, out);
}
sp<IBinder> ibinderForJavaObject(JNIEnv* env, jobject obj) //obj = new HelloService()
{
if (obj == NULL) return NULL;
if (env->IsInstanceOf(obj, gBinderOffsets.mClass)) {
JavaBBinderHolder* jbh = (JavaBBinderHolder*)
env->GetLongField(obj, gBinderOffsets.mObject);
return jbh != NULL ? jbh->get(env, obj) : NULL;
}
if (env->IsInstanceOf(obj, gBinderProxyOffsets.mClass)) {
return (IBinder*)
env->GetLongField(obj, gBinderProxyOffsets.mObject);
}
ALOGW("ibinderForJavaObject: %p is not a Binder object", obj);
return NULL;
}
class JavaBBinderHolder : public RefBase
{
public:
sp<JavaBBinder> get(JNIEnv* env, jobject obj)
{
AutoMutex _l(mLock);
sp<JavaBBinder> b = mBinder.promote();
if (b == NULL) {
b = new JavaBBinder(env, obj); //obj = new HelloService()
mBinder = b;
ALOGV("Creating JavaBinder %p (refs %p) for Object %p, weakCount=%" PRId32 "\n",
b.get(), b->getWeakRefs(), obj, b->getWeakRefs()->getWeakCount());
}
return b;
}
sp<JavaBBinder> getExisting()
{
AutoMutex _l(mLock);
return mBinder.promote();
}
private:
Mutex mLock;
wp<JavaBBinder> mBinder;
};
JavaBBinder(JNIEnv* env, jobject object)
: mVM(jnienv_to_javavm(env)), mObject(env->NewGlobalRef(object)) //mObject points to the Java HelloService object
{
ALOGV("Creating JavaBBinder %p\n", this);
android_atomic_inc(&gNumLocalRefs);
incRefsCreated(env);
}
Afterword
As stated in the preface, Binder is the bridge through which Android's service processes communicate. If you want to read the Android source code, Binder is an indispensable topic, and mastering how it works is crucial for reading and understanding those sources.
After more than half a year I have finally worked my way through Android Binder. Together with the four earlier articles, the Binder series now totals five posts; if you read them all carefully, I believe they will be of real help.
Android Binder Explained Deep into the Kernel, Part I
Android Binder Explained Deep into the Kernel, Part II
Android Binder Explained Deep into the Kernel, Part III
Writing a C++ Android Binder Service by Hand, with Source Code Analysis
Thanks again to:
Wei Dongshan's Binder course
Honestly, Binder is not easy to understand; I must have watched the course videos five times and combined them with my own source analysis before I reached a solid understanding of my own.