Google I/O 2013 – Volley: Easy, Fast Networking for Android



>>Ficus Kirkpatrick: Hi. I’m Ficus Kirkpatrick.
I’m an engineer on the Android team. I work on the Play Store. And I’m here to talk to
you today about something we made that makes it really easy to develop super fast networked
applications for Android. [ Applause ]
>>Ficus Kirkpatrick: All right. Very good! It’s called Volley.
So what is Volley? Well, it’s a networking library. So when I say “Volley,” you might
be thinking of something like this, you know, a client sends a request to a server over
the net. It comes back. Metaphor complete. Message received.
But when I named it, I was really thinking of something more like this.
Mobile app design is getting better and better. But also more and more demanding for engineers.
A lot of current designs are really beautiful, with a lot of rich metadata and tons of images.
But these things require tons of network requests to get all that data and images.
And there’s a tremendous opportunity to make our apps faster by running those requests
in parallel. So you probably don’t want to do quite as
many requests in parallel as shown up here. I think if we stick to the metaphor, that
looks kind of like a denial of service attack. [ Laughter ]
>>Ficus Kirkpatrick: But Volley does make it very easy for you to run your requests concurrently
without really having to think about threading or networking yourself.
Sorry, threading or synchronization yourself. The library comes with most things you need
right out of the box, things like accessing JSON APIs or loading images are trivial to
do with Volley. But it’s easy to grow with your needs as well.
There’s a lot of customization options. You can do your own custom retry and backoff algorithms.
You can customize request priority and ordering, and a lot more.
It also includes a very powerful request tracing system that we’ve found invaluable in debugging
and profiling our network activity on the Play Store.
Okay. So why do we even need a networking library at all? Android already has Apache
HttpClient. It already has HttpURLConnection. True, it sure does. People have made tons
of great apps with them. So here are four apps that I use all the time
on my phone, Foursquare, Twitter, YouTube, and News and Weather. These apps are great.
I love them. And they have a lot of things in common. They display paginated lists of
items, typically with a thumbnail for each item. You can click on something and view
an item with more metadata or more images. And there’s typically some kind of write operation
or post request type thing, like posting a tweet or writing a restaurant review or something
like that. And all of these apps had to reinvent several
wheels to do what are, at heart, basically the same tasks.
Doing response caching so things are fast, getting retry right, executing requests in
parallel. Everybody pretty much rolls their own.
So what we wanted to do with Volley was both provide an interface where you can step up
a little bit and think a little less about wire protocols and network transports and
more about logical operations as well as just wrap up a lot of the things that we’ve learned
in developing production-quality apps at Google over the last few years.
So Volley was designed for exactly the kinds of needs that those applications have. Relatively
small, RPC-like operations, fetching some metadata or getting a profile picture, and
putting them in the view hierarchy. It’s also perfectly fine for doing those same
kinds of operations in the background. But it really excels at the intersection of UI
and network. It’s not for everything, though. Responses are delivered whole in memory. So
that makes for an API that’s really easy to use but not a good one for streaming operations
like downloading a video or an MP3. But that’s okay. You don’t have to use it for everything.
There are several apps at Google that use Volley for their RPCs and use another system,
like Download Manager or whatever, for long-lived, large file download operations.
Okay. So let’s take a simplified example of that kind of common pattern that I talked
about. I built a sample app that shows a title and a description loaded page by page from
a server, with a nice-looking thumbnail image for each item.
Okay. So we have a server that speaks a relatively simple JSON protocol. This is, by the way,
going to be the basis of my microblogging startup. So if any of you are angel investors,
please come see me afterward. [ Laughter ]
>>Ficus Kirkpatrick: The protocol is really simple.
We’ve got — the outer response is just a list of item JSON objects with an optional
token to retrieve the next page. And each item is simply a title, a description,
and an image URL all as strings. Pretty straightforward. On the client side, again, we have a pretty
typical application architecture. We’ve got an activity. It holds an adapter and a ListView.
The adapter loads data from the network from the API server and manufactures views for
the ListView upon request. Okay. So here we are in getView in our adapter
in a typical implementation. The standard trick here is to use the position passed in
to getView as a hint for when you should download the next page. For example, if you have ten
items in your page, when the ListView asks you for the eighth thing, you go ahead and
call loadMoreData to fetch the next page of data.
Okay. Now, how is loadMoreData actually implemented?
The usual approach is to use an AsyncTask or something like that. So here in doInBackground,
we open an HttpURLConnection. We get the input stream for the response data. We make
a ByteArrayOutputStream to copy it into. We use this magical function that I’ve written
100 times in my life to copy it from the input stream to the output stream. And then we pass
it to the JSONObject constructor for parsing and return it back.
So pretty straightforward conceptually. One thing to note here is that there’s actually
a ton of boilerplate crap I had to cut out in order to make this fit on one slide. So
you don’t see all the extra async task stuff. And I think more annoyingly, you don’t see
all of the try-catch blocks here. You have to try-catch for I/O exception if there’s
a network problem or JSON exception, if there’s a parsing error. And then you have to close
your input stream. And then closing your input stream can throw, too. So you need to try-catch
there and have a finally block. And this whole thing is probably twice the size or more if
you were to see it in something that would compile.
Finally, we land back in onPostExecute on the main thread with our parsed JSON object.
We pluck out the individual items from that root JSON object, append them to the list
that backs our adapter, call notifyDataSetChanged, and then let the ListView call us
back to actually make the views. Okay. Now we’re back in getView with some
data. So now we can populate our two text views with the title and description strings.
And we now have the image URL, so we can kick that off for loading.
Again, if you’re familiar with this, you typically use an AsyncTask or something like that.
So let’s look at a naive implementation of an image-loading async task. Here in the constructor,
we squirrel away the image view that we want to populate from the image at the given URL.
And then doInBackground is pretty straightforward from there. You open the HttpURLConnection,
you get the response stream and pass it off to BitmapFactory for image decoding.
Again, I had to cut out all of the try-catch, you know, muckety-muck boilerplate stuff.
It’s all pretty straightforward so far. But tedious.
Finally, again, we get back onto the main thread in onPostExecute. And we just simply
call setImageBitmap on our ImageView that we saved in the constructor.
Okay. So there are actually a whole bunch of problems with the code I just showed you.
Things ranging from outright bugs to things that are just way too inefficient to ship.
So I want to talk about some of the higher-level issues and problems and then discuss Volley’s
approach to solving them. Okay. First, the code as written, all network
requests execute in serial, so one after the other. And this is because you called AsyncTask.execute,
and I’m blaming you for calling it even though I wrote the code. And you didn’t call
AsyncTask.executeOnExecutor. Or you didn’t pass THREAD_POOL_EXECUTOR to it.
Or maybe you didn’t even know you had to do that.
It also means that in order to fetch the next page of data, if the user is scrolling down,
you have to wait for all the image requests to complete, because things are scheduled
strictly first in, first out. Volley automatically schedules your network
requests across a thread pool. You can choose the size of the pool or just use the default.
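If you do want to pick the size yourself, a minimal sketch of wiring the queue up by hand might look like this; the cache directory name and the pool size here are just illustrative, while DiskBasedCache, BasicNetwork, and HurlStack are the stock toolbox pieces:

    // Sketch only: build a RequestQueue with an explicit network dispatcher pool size
    // instead of relying on the default used by Volley.newRequestQueue().
    File cacheDir = new File(context.getCacheDir(), "volley"); // directory name is an assumption
    Cache cache = new DiskBasedCache(cacheDir);
    Network network = new BasicNetwork(new HurlStack());
    RequestQueue queue = new RequestQueue(cache, network, 2 /* network dispatcher threads */);
    queue.start(); // spins up the cache dispatcher and network dispatchers
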
We had the best results with four threads in our testing in the Play Store. So that’s
the default we chose. Volley also supports request prioritization.
So you can set your main metadata at high or medium priority, set your images to low,
and then if you even — if you have even a single thread, you never have to wait for
more than one image request to complete before you can fetch that next page of data and let
the user continue scrolling down. If you rotate the screen, your activity gets
destroyed, of course. It gets recreated, and you start over from scratch.
So that — in this case, that means reloading absolutely everything from the network.
So this isn’t a big deal; right? You write your own cache. You — maybe you put your
images in a hash map or an LRU cache. And, of course, you still have to do that, which
is tedious. And it also doesn’t help you for the next time the user launches your app and
you need to reload everything again because those caches are only going to live for the
lifetime of your process. Volley provides transparent disk and memory
caching of responses out of the box. So what does transparent caching mean? It just means
the caller doesn’t have to know about the existence of the cache. When there’s a cache
hit, it simply behaves like a super fast network response.
You can provide your own cache implementation, or we have one that you can use in the toolbox
right out of the box. And it’s very fast. We typically see response times of a few milliseconds
for a typical, you know, 10-to-50K JPEG image.
Volley also provides some advanced tools particular to image loading. I’ll talk about that in
a minute. Those of you who have written an app like
this probably saw the bug in my load image async task right off the bat.
We’ve got a bunch of async tasks in flight and they’re all loading the images. And maybe
the user is scrolling around like crazy and the views are getting recycled in the ListView.
So by the time the async task actually completes, onPostExecute runs and we set the bitmap
on an ImageView that doesn’t really belong to the async task anymore.
Okay. So what do you do here? You have a view holder, and you maybe keep the URL in it and
make sure that it matches. But then you’re letting a request complete when it doesn’t
matter anymore. You’re just going to throw the results away, which makes things slow
and it’s wasteful. So then maybe you keep track of your async tasks and cancel them
when you get recycled. Again, more work. Volley provides a really powerful request
cancellation API where you can easily cancel a single request or you can set blocks or
scopes of requests to cancel. Or you can just use the NetworkImageView that I’m going
to talk about in a second from the toolbox, which does everything for you.
Okay. One last problem. Until Gingerbread, HttpURLConnection was buggy and flaky. So
for those of you who would try to run this simple code on Froyo or before, you might
run into some problems here. So you could use Apache HTTP client, but that’s
also buggy and no longer supported. So that’s okay. Volley abstracts away the underlying
network transport. So, in fact, if you use our standard setup helper methods, on Froyo,
you’ll automatically get an Apache stack, which works best there, and on Gingerbread
and above, you automatically get an HttpURLConnection stack.
One really nice thing about this approach is that it doesn’t just solve the problem
of this kind of varying behavior of these HTTP stacks, but it also creates an opportunity.
So say you want to port to Square’s new OkHttp library.
>>>Whoo!>>Ficus Kirkpatrick: Yeah, I’m a fan.
You can just replace that in one place in your app without having to go do a complete
refactor of your code. It’s, like, a much more targeted place.
Okay. So let’s look at how we’d implement this app using Volley.
There are two main things you interact with in Volley, two main classes: RequestQueue
and Request. RequestQueue is the interface you use for dispatching requests to the network.
You can make a request queue on demand if you want, but typically, you’ll instead create
it early on, at startup time, and keep it around and use it as a Singleton.
Here we also make an image loader. And I’ll talk about that in a little bit.
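In code, that setup amounts to something like this. This is just a sketch; LruBitmapCache is an assumed ImageLoader.ImageCache implementation backed by android.util.LruCache, not something that ships in the toolbox:

    // Created once, early on -- for example in Application.onCreate() -- and reused everywhere.
    RequestQueue mRequestQueue = Volley.newRequestQueue(getApplicationContext());

    // ImageLoader needs an in-memory ImageCache; LruBitmapCache is an assumed implementation.
    ImageLoader mImageLoader = new ImageLoader(mRequestQueue, new LruBitmapCache());
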
Okay. Here’s something dense. Here in the Volley version of loadMoreData,
we make a JsonObjectRequest, which is one of the standard requests that comes stock
in the Volley toolbox. And we add it to the queue. We provide it with a listener. And
a listener is the standard callback interface you use for receiving your responses. And
the response listener here looks a whole lot like the async task version in onPostExecute.
We just parse out the JSON, append the items to our backing list, and call notifyDataSetChanged.
So there are two bigger things I want to point
out here. One is, this is it. This is the whole thing. I went on and on about the boilerplate
before. The boilerplate is gone. This is the whole thing, end to end.
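For reference, here is a sketch of roughly what that Volley version of loadMoreData looks like. ITEMS_URL and the mItems backing list are assumptions from this sample app, not Volley APIs:

    private void loadMoreData() {
        JsonObjectRequest request = new JsonObjectRequest(Request.Method.GET, ITEMS_URL, null,
                new Response.Listener<JSONObject>() {
                    @Override
                    public void onResponse(JSONObject response) {
                        // Looks just like onPostExecute: pluck out the items and refresh the list.
                        JSONArray items = response.optJSONArray("items");
                        for (int i = 0; items != null && i < items.length(); i++) {
                            mItems.add(items.optJSONObject(i));
                        }
                        notifyDataSetChanged();
                    }
                },
                new Response.ErrorListener() {
                    @Override
                    public void onErrorResponse(VolleyError error) {
                        // Network or parse error; surface it however the app prefers.
                    }
                });
        mRequestQueue.add(request);
    }
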
Second, the fact that this response listener looks a lot like onPostExecute in the async
task version means it’s easy to port your app to Volley. So you don’t have to boil the
ocean and rewrite your app if you want to start taking advantage of this.
Okay. Getting the image is even easier. We just stow away the image request in our view
holder and cancel it when we’re getting recycled. And image loader, again from the Volley toolbox,
handles the rest. It provides the memory caching for you. It — you can even pass in drawables
for loading or error states. And image loader will take it away from there for you.
But maybe you’re a programmer after my own heart, which is to say, super lazy.
You can just use NetworkImageView in the Volley toolbox. It’s a subclass of ImageView.
You can simply replace your usages of ImageView in your layouts with this. You call
setImageUrl, and you’re done. When it gets detached from the view hierarchy, the request is automatically
cancelled if it’s still in flight. That’s it.
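A minimal usage sketch, where the layout id and the placeholder drawables are assumptions:

    // In getView(): NetworkImageView replaces the plain ImageView in the row layout.
    NetworkImageView thumbnail = (NetworkImageView) convertView.findViewById(R.id.thumbnail);
    thumbnail.setDefaultImageResId(R.drawable.placeholder); // shown while the request is in flight
    thumbnail.setErrorImageResId(R.drawable.error);         // shown if the request fails
    thumbnail.setImageUrl(item.imageUrl, mImageLoader);     // caching and cancellation handled for you
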
One — one subtle but I think pretty awesome thing about Volley’s image loading code is
response batching. So if you use image loader or network image
view, the library holds your image responses for a short period of time after — after
one comes and attempts to batch them together in a single pass to the main thread.
What this means in practice is that batches of images load in the UI simultaneously.
Or, for instance, say you’re running an animation when you load an image, like, you fade it
in. All those things fade in together in sync. And just a little secret sauce on it.
Okay. So the image stuff, I think, shows that Volley handles the basics out of the box pretty
— pretty strongly. But I also want to show what happens when you run out of the toolbox
and you need to roll your own. We have a toolbox, but it doesn’t include
everything. But it’s really easy to extend. So let’s take a look at making a new type
of request. If you haven’t heard of the GSON library,
it’s a JSON serialization and deserialization library that uses reflection to populate your
Java model objects from JSON objects. You just make Java classes that look like your
JSON schema, pass it all to GSON, and it figures it out for you.
So here’s the relevant snippet for a GSON adapter for Volley. This whole thing is about
60 lines of code. This is kind of the meat of it. You can see there’s a gist URL on screen.
You can look at the whole thing to drop into your app if you want. But it’s — you know,
it amounts to, you know, a few lines of code. So we make a string out of the data that comes
back from the server. We pass that to Gson’s fromJson method along with the class object
of our root response that helps with reflection. We use Volley’s HTTP header parser helpers
from the toolbox. And that’s it. You can even see we handle those parse errors. And that
happens at the response time, not necessarily in your application code where you’re thinking
more about errors and less about every different type of them that could happen.
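The core of it, roughly as it appears in that gist, is a parseNetworkResponse override along these lines; mGson and mClazz are fields of the custom GsonRequest&lt;T&gt; class:

    @Override
    protected Response<T> parseNetworkResponse(NetworkResponse response) {
        try {
            // Turn the raw bytes into a string using the charset from the HTTP headers.
            String json = new String(response.data,
                    HttpHeaderParser.parseCharset(response.headers));
            // Let Gson reflectively populate the model object, and reuse Volley's
            // standard cache-header parsing from the toolbox.
            return Response.success(mGson.fromJson(json, mClazz),
                    HttpHeaderParser.parseCacheHeaders(response));
        } catch (UnsupportedEncodingException e) {
            return Response.error(new ParseError(e));
        } catch (JsonSyntaxException e) {
            return Response.error(new ParseError(e));
        }
    }
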
Okay. Let’s take another look at loadMoreData with that new cool request we just made.
So instead of using JsonObjectRequest, we make a new GsonRequest and pass in the class
object of our list response. And then that’s it. It’s even smaller. We append the items
to our backing store and then call notifyDataSetChanged.
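As a sketch, assuming a ListResponse model class matching the JSON schema and a GsonRequest constructor taking the URL, the class object, and the two listeners:

    GsonRequest<ListResponse> request = new GsonRequest<ListResponse>(
            ITEMS_URL, ListResponse.class,
            new Response.Listener<ListResponse>() {
                @Override
                public void onResponse(ListResponse response) {
                    // Gson already populated the model on the worker thread.
                    mItems.addAll(response.items);
                    notifyDataSetChanged();
                }
            },
            new Response.ErrorListener() {
                @Override
                public void onErrorResponse(VolleyError error) {
                    // Handle the failure.
                }
            });
    mRequestQueue.add(request);
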
One cool little thing about this is that it means that the parsing actually happens on
the background worker thread instead of in the main thread, like when we do it on onPostExecute,
so you get a little bit more parallelism there for free.
Okay. I want to change gears a little bit and talk about the underlying implementation
of Volley, kind of how it works, some of the semantics it defines, and give a little look
under the hood as the implementer, not necessarily as the user.
So Volley, like I mentioned before, handles operating a thread pool for you. There’s one
cache dispatcher thread, there’s one or more network dispatcher threads. This diagram shows
the flow of a request through the dispatch pipeline.
The blue boxes are the main thread. That’s typically going to be your calling code and
the response handler. The green boxes are the cache dispatcher thread. And the orange
boxes are the network dispatcher threads. The cache dispatcher’s job is essentially
triage. It figures out whether we can service a request entirely from cache or
whether we have to go to the network to get it, such as in the case of a cache miss, or
perhaps a cache hit that’s expired. In the case of an expired cache hit, it’s
also responsible for forwarding the cache response to the network dispatchers so that
they can be reused in the event of an ETag match and the server has an opportunity to
give you a 304. When we do end up needing to go to the network,
like in the case of — like in the cases I mentioned, the network dispatcher services
the requests over HTTP, handles parsing the responses using that parsing code I just showed,
and then posts the response back to the main thread as a whole parsed object.
As a user of the library, you typically don’t have to think about any of this stuff. You
enqueue your request from the main thread, you get your responses on the main thread,
and that’s it. But if you really want to dig in and learn
more about the request pipeline, Volley has a fire hose of debugging information that’s
available for you if you want it. You can just set a system property, and you get verbose
output of the complete lifetime of all stages of the request dispatch pipeline.
Here’s one random chunk of log I took from my sample app when it was in the middle of
loading a bunch of images. At the top, you can see that this request
took 443 milliseconds. It was low priority, because it’s an image. And it was the 11th
request processed by this dispatcher. Each subsequent line below that describes
another step through the dispatch pipeline. You can see on thread one, that’s the main
thread, at — oh, and, by the way, each line shows the incremental time that that step
took. On the main thread, the request is added to
the queue. 68 milliseconds later, it arrives at the cache dispatcher, which means there
was a little bit of contention for the cache — the cache triage thread there.
The cache dispatcher figures out right away that it’s a hit but it’s expired, and it forwards
it along to the network dispatcher pool. Now, it takes 136 milliseconds to begin processing
by the network dispatcher, which is even longer, so there’s actually even a bit more contention
for the network dispatchers than for the cache dispatcher, you can see.
It takes — the HTTP request takes 127 milliseconds. Now, this is an expired cache hit for an image,
so one thing to note is that this would typically be much faster and you’d get that 304. But
this is a homemade Web server I made, and it’s crappy and doesn’t have any features.
So don’t do that. And finally, parsing the request takes about
100 milliseconds. In this case, it’s an image request, so parsing means image decoding.
One side note. 100 milliseconds is actually a pretty long time to decode an image. What’s
actually going on here is there’s a whole flurry of requests going on and our image
decoder doesn’t allow more concurrent image decodes than there are CPU cores, which is
one more thing we found to be optimal in our testing and development on the Play Store.
Lastly, we write the response bytes to the cache from the network dispatcher. And we
post the parsed object back to the main thread, and we’re done.
So this is a pretty advanced feature, and I don’t think most people will use it. But
we have really found it invaluable in digging in and profiling our use in the Play Store.
Here’s one common thing that you can find using this log.
So if you’re seeing a large amount of time between post response and done, what that
means is that there’s a lot of contention for the main thread. If there’s a lot of contention
for the main thread, you’re doing too much on the main thread. You can dig in from there.
Okay. So why do I keep going on about the main thread?
Obviously, you can’t touch the UI from a background thread, so you’d have to post back there anyway.
But my obsession with the main thread — and ask anyone who works with me, it is an obsession
— is about more than just the inconvenience of having to post a Runnable back. When you
do everything on one thread and don’t share any state with your workers, synchronization
is implicit. You don’t have to think about locking. You don’t want to think about locking,
because locking is hard. And I’ll give you an example.
How many people have ever written a block of code that looks like this?
A few? Okay. Me, too.
So here’s what happens. Here we are, our async task is completed. We’re in onPostExecute,
and something has crashed. We’re touching the UI. The UI is dead. But it’s way too late
in the schedule to figure out why this is happening, so we cram a little null pointer
check in our code. It doesn’t crash anymore, and we head off to the launch party.
[ Laughter ]>>Ficus Kirkpatrick: But it’s a waste of effort,
it’s a waste of CPU cycles, it’s a waste of network bandwidth, and it’s a waste of battery
to allow a request to finish that you’re just going to ignore the result of.
So it also leaves these warts of, like, if-null checks at the top of all your methods
all over your code. I mentioned Volley has a powerful cancellation
API. And the main thread interfaces with it like this: Volley always delivers responses
on the main thread. What that means is that once cancel returns, you can be guaranteed
that your response will not be delivered. So here you can just keep track of all the
requests you have in flight and then cancel them in onStop. You know that when onStop
returns and you’re no longer allowed to touch the UI, the request won’t be delivered, so
you don’t have to check for null at the top of all of your listeners.
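A sketch of that pattern, assuming the activity keeps the requests it has added to the queue in an mInFlightRequests collection:

    @Override
    protected void onStop() {
        super.onStop();
        // Once cancel() returns, Volley guarantees the listener will not be invoked,
        // so the response handlers don't need null checks on the UI.
        for (Request<?> request : mInFlightRequests) {
            request.cancel();
        }
        mInFlightRequests.clear();
    }
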
But you can actually do this with async task, too.
So how about this? I mentioned Volley lets you set up blocks
or scopes of requests. It lets you tag any request with an arbitrary object that you
can use to set up a bulk cancellation scope. So in this example, you tag all the requests
with the activity they belong to, and then you simply pass those to cancel all in onStop,
and you can be guaranteed that you’re not going to get any more responses.
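In code, that’s roughly the following; tagging with the activity instance itself is just one choice of scope, and the class name here is made up:

    public class ItemListActivity extends Activity {
        private RequestQueue mRequestQueue; // assume this is initialized elsewhere

        private void enqueue(Request<?> request) {
            request.setTag(this);        // scope the request to this activity
            mRequestQueue.add(request);
        }

        @Override
        protected void onStop() {
            super.onStop();
            // Cancels everything tagged with this activity; nothing in that scope
            // will be delivered after this call returns.
            mRequestQueue.cancelAll(this);
        }
    }
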
You don’t have to use the activity. You could define a smaller scope, like, say you want
to group all thumbnail requests from one view pager tab and cancel them when you swipe away.
You can do that, too. If you really want to go nuts, you can actually
specify a custom filter deciding which requests to keep and which requests to cancel. Say,
for example, you’ve got a post request in the background and you want to let that go,
but you want to cancel all your thumbnails. You can do that, too.
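A sketch of that kind of filter, letting any other in-flight requests finish but dropping the toolbox ImageRequests:

    mRequestQueue.cancelAll(new RequestQueue.RequestFilter() {
        @Override
        public boolean apply(Request<?> request) {
            // Returning true cancels the request; here only thumbnail loads are dropped.
            return request instanceof ImageRequest;
        }
    });
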
Okay. So we’ve covered how easy it is to build an app with Volley, or even port your app
to Volley, and how to grow up from there. Just on the porting topic, I want to mention,
those of you who have the Google I/O 2013 app, that was ported to Volley about a month
ago, and the whole thing took about half a day. And it wasn’t me who did it, by the way.
So you don’t have to be an expert. We’ve also seen those more complicated cases: defining
custom requests, doing, you know, more advanced cancellation operations. And you can do it
with a lot less code and a lot more functionality than you could rolling your own. It’s easier
than doing it yourself, and you get the benefit of all of those — all those lessons we’ve
learned the hard way in developing our apps at Google.
But it’s also faster than doing it yourself. The
Google+ team did a benchmark last year of a bunch of different networking libraries, and
Volley beat every single one. They ran on Froyo, through Ice Cream Sandwich and Jelly Bean,
on different types of hardware. They ran on EDGE, 3G, 4G, wi-fi. Volley was faster every
single time, in some cases by a factor of 10.
Okay. So you’re sold. How do you get started? It’s pretty easy. Just clone the project.
We have the repository URL up on the screen. From there it’s pretty simple. There’s an
Eclipse project — I guess we will need to add an Android Studio one — right there in
the repository. There’s also a build.xml file so you can use
add to build yourself a jar or you can just drop your code into your app, any of those
things. You call volley.newrequestqueue, start adding
requests to the queue and you’re off and running. That’s pretty much it.
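It really is about that short; here’s a getting-started sketch with a toolbox StringRequest, where the URL is just a placeholder:

    RequestQueue queue = Volley.newRequestQueue(context);
    queue.add(new StringRequest(Request.Method.GET, "http://www.example.com/",
            new Response.Listener<String>() {
                @Override
                public void onResponse(String response) {
                    // Delivered on the main thread.
                }
            },
            new Response.ErrorListener() {
                @Override
                public void onErrorResponse(VolleyError error) {
                    // Handle the failure.
                }
            }));
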
Anything else? That’s all I have. Thank you very much for coming.
[ Applause ]>>Ficus Kirkpatrick: I did want to leave time
for Q and A, and I will also be at office hours after this or you can hit me up on G+
or Twitter. So we can take some questions.
>>>Hi, my name is Alexi with (Indiscernible). Can you talk about size of the in-memory cache
for bitmaps, the disk size of the cache for bitmaps? And if you hit the disk cache, on which thread does
the bitmap decoding happen?>>Ficus Kirkpatrick: That’s a great question.
So the sizes of the caches are completely configurable by you. You can choose the in-memory
cache size and the disk cache size separately. All blocking IO is done on background threads.
We never do blocking IO on the main thread. But it’s actually important to do the in-memory
cache stuff on the main thread before you ever even get to the HTTP stack. The reason
for that being that you don’t want to loop back into the main looper again. You want
to know immediately that you have your bitmap so that you don’t defer, let a layout pass
happen and then set your image bitmap and you get a little flicker.
Does that answer your question?>>>[ Inaudible ].
>>Ficus Kirkpatrick: He asked about disk thread versus networking thread.
Caching — caching from the disk is all read on the cache thread. So there’s serialization
on the cache thread. Basically things can contend for the cache thread and get stuck if
one thing is reading from the cache. We have a design for how to split up cache workers
or defer the work somewhere else. We just haven’t needed it. It’s been fast enough.
>>>So my name is Mike from Abex Systems. I’m really happy that you guys were able to
encapsulate the caching logic. We had troubles trying to do that ourselves. My question is
one thing that we need to do is establish different lifetimes for cached objects. So for
a certain object we determine — we deem it as static versus dynamic and we cache it for
different lengths of time. Is that something that will be supported by this Volley structure?
>>Ficus Kirkpatrick: Yeah. So he’s asking about cache TTL basically.
So Volley’s cache implementation on disk respects all the standard HTTP cache headers. So if
your server sets this thing expires immediately, it respects that. It doesn’t even get written
to cache. If you say this thing lasts for a day, this
thing lasts for a year, again, the expiration is respected.
We don’t have — there’s no TTL in memory cache. In our experience things just thrash
through way too fast, but again you can pass in your own implementation if you need to.
>>>Hi, Ficus. I’m Chuck from AWeber Communications. Thank you for this. I’ve seen a lot of different
approaches to solving this problem and written a few myself, and one common thing that seems
to occur a lot is the dreaded out of memory exceptions, and particularly when you’re dealing
with larger images and lower end devices. So I’m wondering if you saw how Volley compared
to other solutions and the number of occurrences of this and some tips for dealing with that
scenario.>>Ficus Kirkpatrick: Yeah, we definitely struggled
with out of memory errors a lot. It’s a tough problem because if you have a lot of memory
to use you can make things fast. The solution that we ended up on that worked
really well for us was a couple of things. One, I mentioned that we don’t decode more
things concurrently than there are CPU cores. That actually ends up having a pretty positive
effect on your kind of instantaneous heap pressure.
And the other thing that we do is that we actually in the Play Store we specify the
size of the in-memory cache as a function of the screen size, which kind of makes sense,
right, because you typically want to have enough in cache to fill three screens’ worth
of data. As it turns out, devices are typically built
with the memory to support the screen that they need to fill, so it tends to work out
somewhat coincidentally. I would say this isn’t really a Volley thing.
I think it’s a “how to choose your cache size” thing.
And so we don’t use soft references or weak references. They don’t work well. We use hard
references and set a strong budget. And we are conservative in setting that budget by
scaling down with the screen and really with the number of pixels.
>>>Thanks.>>>Hi, my name is Mark. I would like to know
when is this going to make it into the SDK manager?
And the second part of the question is lots of images, small size, this sounds a lot like
the network protocol SPDY. Are you planning on integrating SPDY into Volley?
>>Ficus Kirkpatrick: This is easy to answer. I don’t know.
It’s easy enough to get cloned. I don’t think we have a great answer for that, so I will
duck that and answer your question about SPDY. I’m really excited about SPDY. Volley was
— it was on my mind when we were designing it.
Somebody right now is working on putting the OkHttp library that Square just released — you
may have seen it — as an underlying transport stack for Volley.
I haven’t tried OkHttp with SPDY yet, but according to Jesse, the author, you can plug SPDY into
the back of OkHttp, which would pretty much give you SPDY in Volley.
So yes, you can do it. It doesn’t work yet, but we intend for it to work really soon.
>>>That’s great.>>>Hey. One of the things that we have found
is that a lot of our activities tend to have a mapping to like an API call. And what we
would like to do is basically make the API call before we do the transition to the activity.
A lot of these requests can’t actually be cached or shouldn’t be cached.
Is there any way to serialize your request objects?
>>Ficus Kirkpatrick: Yes. You can kind of roll your own or the really dirty thing that
we have done before is you make a separate request queue with one worker thread and then
things are first in, first out if they’re the same priority.
So you can do it. It’s — the API is not optimized for it.
>>>Okay. Any plans to optimize the API for it or add it as a feature?
>>Ficus Kirkpatrick: No. Traditionally what we do in the case where we have API calls
that need to be made in serial is we initiate the second one from the response handler of
the first one.>>>Oh, sorry. By serialize the API call,
I mean like parcel it so that you can pass it across the activity boundary.
>>Ficus Kirkpatrick: I see. So what I would do in that case, I think, is just pass the
URL of your request or the metadata in a Parcelable or something, right?
>>>Right, but then the request isn’t going to start until after you make the transition
across activities.>>Ficus Kirkpatrick: Ah, I see. So I think
your option there is going to be to kind of shove it into static somewhere.
I don’t think there’s a great answer to your question.
>>>Okay. Thanks.>>>Hi, my name is Jonathan. I just have a
quick question. In all your examples you were using JSON request and responses.
I was wondering if there was any way to use different types of protocol like XML, for
example?>>Ficus Kirkpatrick: Sure. Yeah, I think I
just forgot to mention this. We use protocol buffers pervasively in the
Play Store, and the Play Store runs on Volley. Yeah, you can use XML, you can — I used JSON
because it’s easier to read on the slides, but yeah, we use protobuf, you can use XML.
We have some things that are done as raw strings. Obviously images are just another format of request.
>>>Okay. So it should be pretty easy to plug in whatever we want to use?
>>Ficus Kirkpatrick: Very much. And if you look at the gist I showed in the presentation,
it’ll give you a starting point for writing, like, your own XML-specific request.
>>>Okay. Sounds good. And last question: Is there anything built in to handle
retries in case of request failures and stuff?
>>Ficus Kirkpatrick: Yes, it does have a retry for you. I didn’t have time to cover it right
now, but you can set a custom retry policy, you can set a backoff algorithm, all that stuff.
>>>All right. Sounds great. Thanks a lot.>>Ficus Kirkpatrick: So I’m going to hold
questions for now. I’ll be at office hours if you guys want to talk to me in person.
Thanks again for coming. Please scan the QR code to rate this session unless you didn’t
like it. And thanks very much. [ Applause ]
