
Friday, May 26, 2023

Course Summary - DeepLearning.AI - ChatGPT Prompt Engineering for Developers

  • TL;DR

    • Enjoyed learning about the emerging field of Prompt Engineering via this course taught by Andrew Ng ( DeepLearning.AI ) and Isa Fulford ( OpenAI ).

    ----------------------------------------------------------------------------------------------------------

        • Chapters

          • Introduction
          • Guidelines
          • Iterative
          • Summarizing
          • Inferring

            • Extracting Key topic(s)
            • Extracting Sentiment, and Sentiment score(s)
            • Executing multiple task(s) with a single prompt

          • Transforming

            • Language translation
            • Inferring a language
            • Multiple translations
            • Universal translator
              • Multiple input and output language(s) are supported
            • Tone transformation
              • Make some text more / less compelling
              • Make some text more / less formal
            • Translate across format(s)
              • For example JSON to HTML
            • Spellcheck / Grammar check

          • Expanding

            • Temperature input parameter can be used to control the level of randomness of the response.
              • For more 'production' grade applications, it is recommended to set this to 0.

          • Chatbot
            • In this mode we can set up the context for a conversation using a system prompt, and carry it forward.

            • Models are stateless; to carry out a conversation, all prior context must be provided at the time of each interaction.

            • In a few lines of code, a full-blown pizza order-entry chatbot was written, which accurately interacted with the user and captured the order.

          • Conclusion

          ----------------------------------------------------------------------------------------------------------

            • Important Terms

              • Zero-Shot Learning

              ----------------------------------------------------------------------------------------------------------


                • Key Learning(s)

                  • With GPT it is possible to have a single model that performs multiple language tasks, in a matter of minutes.
                  • Prompt Engineering
                    • Writing clear and specific instructions is important.
                    • The OpenAI API can be used for programmatically interfacing with the GPT model(s).
                    • Delimiters are important to identify different parts of your prompt.
                    • Output(s) can be easily modified to different format(s).
                      • JSON Output can be very helpful for ingestion into program(s).
                  • Learned about the following interesting Python packages:
                    • Redlines
                      • It can be used to programmatically execute diffs between two pieces of text, and display them in a visually clean manner.
                    • Panel
                      • Can be used to rapidly spin-up user interface(s), for example, within Jupyter Notebooks. 
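To make the statelessness point concrete, here is a minimal Python sketch ( not the course's actual code; `fake_model_reply` is an invented stand-in for a real model call, e.g. OpenAI's chat completion endpoint, which for 'production' grade use would typically be called with temperature=0 ) showing how the full message history is re-sent on every turn:

```python
# Sketch of carrying conversation context forward with a stateless chat model.
# The model remembers nothing between calls, so the full message history
# (system prompt included) must be re-sent on every interaction.

def fake_model_reply(messages):
    # Placeholder for a real model call; simply echoes the last user message.
    last_user = [m for m in messages if m["role"] == "user"][-1]
    return "You said: " + last_user["content"]

def make_conversation(system_prompt):
    # The system prompt sets up the context for the whole conversation.
    messages = [{"role": "system", "content": system_prompt}]

    def send(user_text):
        # Append the user turn, call the (stateless) model with the FULL
        # history, then append the assistant turn so the next call sees it.
        messages.append({"role": "user", "content": user_text})
        reply = fake_model_reply(messages)
        messages.append({"role": "assistant", "content": reply})
        return reply

    return send, messages

send, history = make_conversation("You are an order-taking bot for a pizza shop.")
send("I'd like a small margherita.")
send("Add a soda too.")
# After two turns the history holds: 1 system + 2 user + 2 assistant messages.
```

The same accumulate-and-resend pattern is what makes a multi-turn chatbot possible on top of a stateless model.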
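As a stdlib-only illustration of the diffing idea behind Redlines ( this uses Python's difflib, not the Redlines API itself ), a word-level diff between two pieces of text can be sketched as:

```python
# Word-level diff sketch: mark deletions as [-...-] and insertions as {+...+}.
import difflib

def word_diff(before, after):
    b_words, a_words = before.split(), after.split()
    sm = difflib.SequenceMatcher(None, b_words, a_words)
    out = []
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op in ("delete", "replace"):
            out.append("[-" + " ".join(b_words[i1:i2]) + "-]")
        if op in ("insert", "replace"):
            out.append("{+" + " ".join(a_words[j1:j2]) + "+}")
        if op == "equal":
            out.append(" ".join(b_words[i1:i2]))
    return " ".join(out)

print(word_diff("the quick brown fox", "the slow brown fox"))
```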

                  ----------------------------------------------------------------------------------------------------------

                Monday, May 22, 2023

                Restarting Android Development

                • Of late, I have developed some level of interest in getting back into the Mobile development space.
                • Just like when I started originally, I am starting it off with Android development:
                  • This is primarily driven from the relatively open nature of Android development, as well as the ready availability of hardware for testing. 
                • My short-term goals are to be able to build, test and deploy some applications which are able to perform on-Device Machine Learning tasks. 
                • I also want to understand the impact of Quantization, and mobile-optimization techniques on real-world performance. 

                More to come !

                Wednesday, December 30, 2020

                Time management ( especially at work )

                A collection of thoughts around time management and a recipe for achieving your goals, especially as applied to work. A 'light' / less structured version of the same can be applied even outside of work. It is essentially the Pomodoro technique; however, I did not know that this technique had a proper name when I started using it a few years ago.

                • Every week, I try to plan out my next week and wrap up this planning activity by Friday of the preceding week. 
                  • This allows me to be better prepared and more deterministic about my next week.
                    • Note that distractions, and high priority events will always happen, but other than that, it allows me to make sure that my time allocations are aligned with what I set out to achieve.
                • I assume that you already have a list of short term and long term goals and deliverables that you want to accomplish. 
                  • If you don't, then that would be the first step.
                • Ensure that you carve out time, via Calendar-based events for all the different goals that you have decided on a short term and long term basis. 
                  • This is probably the most important step, because this allows you to evaluate the importance of each goal against the time you have available. In other words, in this step you are putting your money ( your time ) where your mouth ( your stated goals ) is. It also allows you to prioritize your goals in a more concrete manner, rather than amorphously thinking 'I will accomplish xyz'. If you are unwilling to carve out the time for your goal, then the goal is probably not that important to begin with !
                  • The time you carve out for each goal, should be proportional to the importance of that goal.
                  • Ensure that you have proper calendar events for those specific carved out times. 
                    • Make sure that you carve out the time as 'free' on your Calendar, so you continue to be available to your team if a high priority issue comes up and a meeting needs to be set up at the last moment for a discussion.
                  • Make sure that the duration of such chunks of time matches the stretch of time you can focus well. In other words, if you can focus continuously and be productive for x minutes, then make sure that the chunk is of duration x minutes.
                  • Keep a short mental break of say 5 minutes between such chunks to allow you to context-switch to the next task / goal.
                    • Sometimes, I did not keep this short gap, and it made context-switching pretty difficult and also led to mental exhaustion and potential burnout.
                • For any meetings that you are an organizer of, make sure that the agenda is clear and well-defined. 
                  • Also, anticipate which questions are likely to come up, and from whom which inputs will be needed.
                  • If the topic / subject area could be big, it might make sense to meet with a smaller group ahead of time, to clarify an issue. 
                • Make sure that all the meetings you are a part of, have a well-defined agenda / goal. 
                  • If not, then solicit feedback from the organizer about the goal for the meeting. This ensures that you are aware of what the expectations are for the meeting, and how specifically you will contribute to that meeting.
                • For one-on-one meetings, set up some dedicated preparation time and follow-up time.
                  • For one-on-one meetings, set up some dedicated preparation time, in which you call out the high level talking points in terms of what you want to present, as well as discuss / ask.
                  • Additionally, set up some dedicated time after the meeting to follow up on the next steps / tasks that came out as a result of the meeting.
                    • Some of the follow-up items might be high priority, and it is best to get those addressed then and there, or the items might be lower priority in which case those could be deferred to a suitable date.


                Saturday, February 23, 2019

                New MOOC on Self-Driving Cars

                If you are interested in the self-driving car ( / Autonomous Vehicles - AV ) space, and are looking for a more in-depth look at the various technical and non-technical challenges that need to be resolved before self-driving cars can be commonplace in our society, then Coursera has launched a new course, which is available here

                It is structured as a Teachout, which is a format that encourages posing open-ended questions, soliciting potential approaches to solutions from students, and learning as much via peer interaction as via the standard instructor-student interaction.

                Monday, May 1, 2017

                Keras Documentation for different API level(s)

                Recently, I ran into an issue in which I was using an older ( 1.2.1 ) version of Keras, while the documentation online covers just the latest version of Keras ( 2.x ). This is a useful website, which provides API documentation for the various versions.

                Friday, March 17, 2017

                Self-Driving Car Algorithm Consideration(s)

                Just listing out some considerations / parameters for a self-driving car algorithm : 
                • Are lane markings clear ?
                • What is the traffic in the lane immediately ahead ?
                • What is the traffic in the adjacent lane(s) ?
                • What is the approaching / predicted curvature of the roads nearby ?
                • Are any pedestrians approaching ?
                • What do the traffic signs suggest about the speed limit ?
                • How to sense the driving condition(s) and what would be the safe speed limit for the current driving condition(s) ?
                • How to sense if an emergency vehicle is approaching ? This could be done by audio / visual techniques.
                • For a situation in which an incident cannot be prevented, how to minimize loss of human life ( and secondarily property ) ?
                • Track the trend in the brake lights of the cars up front, so that if the cars ahead start to activate their brake lights, a slow-down can be initiated.
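The considerations above could feed a decision layer. As a purely illustrative, toy sketch ( the function, thresholds and condition names here are invented, and bear no relation to a production AV stack ), a rule-based target-speed decision might combine a few of them like this:

```python
# Toy rule-based target-speed decision, for illustration only.
# Each rule reflects one of the considerations listed above.

def target_speed(posted_limit_mph, lane_markings_clear, pedestrians_near,
                 brake_lights_ahead, weather):
    speed = posted_limit_mph
    if weather in ("rain", "snow", "fog"):   # degraded driving conditions
        speed *= 0.7
    if not lane_markings_clear:              # unclear lanes -> be conservative
        speed *= 0.8
    if brake_lights_ahead:                   # cars ahead are slowing down
        speed *= 0.5
    if pedestrians_near:                     # pedestrians take priority
        speed = min(speed, 10)
    return round(speed, 1)
```

A real system would of course fuse continuous sensor estimates rather than booleans, but the shape of the problem is the same: many independent signals constraining one control output.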

                Tuesday, June 9, 2015

                When [NSFileManager defaultManager] became nil

                A few days back we faced an interesting problem. To use / manipulate a locally stored file, we were using [NSFileManager defaultManager] to obtain an instance of the default file manager, and then were trying to access the file. However, whenever this code fragment was executed, the app would crash. We set up breakpoints to try to isolate the issue, and during debugging it appeared that on the very first line, where we tried to obtain a reference to the defaultManager, the instance was a valid one, but the moment we actually tried to use the reference to do something useful, the instance showed up as nil. We checked online as well, couldn't find any references to others having a similar problem, and spent some time being perplexed as to why the defaultManager would be returned as nil.

                After some time, when we didn't make progress on resolving the defaultManager issue, we decided to check the path to the file that we were trying to access, and found that the path was invalid due to a bundling issue, which was in turn caused by a build script issue. We immediately resolved the build script issue, and subsequently the defaultManager returned a valid reference ( as expected ! ). While we were happy at resolving the issue, we also wished that the defaultManager had not shown up as nil in the debugger, since that led us to spend quite a bit of time trying to figure that one out ( which ultimately turned out to be a "ghost" issue distracting us from the real problem ).

                Friday, May 15, 2015

                Portfolio

                A collection of link(s) representing some of my code, apps, certifications and profiles.
                ---------------------


                Certifications


                ---------------------
                Code

                ---------------------

                Apps

                - Developed / architected significant modules for the following top-rated consumer applications.

                - Developed and architected significant modules for internal mobile applications used within Fidelity by Portfolio Managers and Research Analysts.

                ---------------------

                Profiles

                ---------------------

                Blog 

                ---------------------

                Photography

                ---------------------



                Wednesday, November 6, 2013

                A brief review of Nexus 5 and KitKat

                So, after a long wait, the world's 'most leaked' phone a.k.a. the Nexus-5 was finally launched last week. Most of the information with regards to its specifications had already been leaked multiple times prior to its official launch. Thanks to Android Police for tracking the final official launch of the Nexus-5. What follows below are some observations after a few hours of using the phone.

                About this review

                This is going to be a personal ( and subjective, i.e. non-benchmarked ), at-a-glance review of the (just received) Nexus 5, and Android 4.4. As of right now, Android 4.4 is only available on the Nexus 5, but since Google has reduced the memory footprint of Android 4.4, it should be spreading to other devices pretty soon. The official exclusion of the Galaxy Nexus ( at least as of now ) is surprising and disappointing to me, however.

                General comments with regards to the size
                My general comments with regards to the size are that, coming from a Galaxy Nexus, the Nexus 5 seems a tad big to me. However, it is a size which I feel I will get used to in a few days, just as I got used to the Galaxy Nexus when I first got it. The Nexus 5 is also marginally bigger than the Nexus 4.

                General comments with regards to performance
                The phone has a solid overall performance in terms of launching apps for the first time, restoring currently running apps, switching between apps via the task switcher, and the overall fluidity and refinement of transparency and motion animations. I believe that Google hit a sweet spot in Android UI design from Android 4.0 onwards, and Android 4.4 builds on top of all the incremental updates to improve the overall look and feel across the board. In common tasks, like browsing, listening to music, watching videos etc., I didn't find a significant change between the Nexus-5 and the Nexus-4, which is not a bad thing at all. The Nexus-4 ( with 4.3 ) screams while performing these common tasks, and the Nexus-5 incrementally improves on that. I am sure that for highly performance-intensive tasks, the Nexus-5's updated processor truly shows its power. Indeed, in some tests it has shown itself to be one of the best performing Android phones in the market today.

                General comments with regards to Android KitKat 4.4
                Some of the features of the software package on Nexus-5 are listed below:
                1. Google Now is just a left swipe away due to a totally re-designed Launcher App.
                2. Google Now is also accessible from the home-screen via the keyword 'OK Google'. This is still not as nice as the 'Always-On' mode of Moto-X, or the ability to launch Siri from any screen in iOS devices.
                3. Minor improvements like transparent status bar, and bottom menu bar.
                4. The Camera app contains an HDR+ mode which actually takes multiple pictures at multiple exposures, and combines them. This is different than the traditional HDR Camera apps which do not follow this process. It should also be noted that the Camera is Optically stabilized, which is useful for certain shots when your hand may not be stable.
                5. A full-screen mode which is useful for Apps like Games, or Videos, or for reading eBooks.
                6. Higher security with SELinux in enforcement mode.
                7. Typing in Hindi is now much easier.
                8. A foray into a totally new run-time known as ART, which promises to significantly boost overall performance.
                9. Number of home screens is unlimited.
                10. Pedometer-like functionality is now a part of the Nexus-5 / KitKat combination, so devices like Fitbit etc. have a new challenger.
                11. Business Caller-ID is now integrated as part of the Dialer App. What this means is that if you receive a call from a Business that Google has catalogued, then you will be presented with the relevant information when they call you. It also means that local Business search is integrated as part of the Dialer App. So if you want to order pizza, you don't need to open the browser and search for Pizza, but instead go straight to the Dialer App, search for Pizza there and Google starts showing local options immediately. There are plans to expand this to Individual Caller-ID as well.
                12. Expanded developer options.

                Q. Why did you choose a Nexus device ? Isn't Android Phone X better in parameter Y than a Nexus device ?
                A. One word ( or two actually ) - OS updates. To me, Android OS updates coming straight from Google, without having to wait for carriers and/or OEMs ( Original Equipment Manufacturers ), is one of the most important aspects of owning a device. Staying on the latest platform not only ensures that you receive the obvious visual features, but also, most importantly, the latest security patches and updates.

                Q. What about the cost ?
                A. Unlocked - 16 GB ( $350 ) , 32 GB ( $399 ). For the above mentioned performance and features, this is an unbeatable price since it compares favorably with many of the phones which cost double unlocked.

                Q. What about the battery ?
                A. I am expecting battery life comparable to, or marginally better than, that of the Nexus 4, due to the slightly larger battery. I will update this section once I have more info. It is interesting to note here that the Nexus-5 features some technology to reduce battery consumption.

                Additional Reading and Extras:
                1. Excellent overall summary of Nexus-5 and KitKat.
                2. Google's official listing of KitKat updates.
                3. The official video on KitKat.
                4. Last but not the least, Google's official promotional video for Nexus-5.



                Sunday, July 7, 2013

                Week long stress test of Google Glasses


                The Origin
                • Google conducted an #IfIhadGlass campaign earlier this year, and a post of mine on Google Plus got me selected in the campaign. I had hoped to not just use it as an end user, but also to write apps for it. This review will focus more on usage of the device, as compared to a hardware breakdown of the Glasses.

                The Pickup
                • Around the first week of June, I received a notification from Google that I could pick up the glasses within a month, and I scheduled my Google Glass pickup at the Chelsea Market site on 29th June. At the time of pickup, I took a 360 degree panorama of the site which can be seen here. During the appointment, basic functionality like taking pictures, videos, setting up WiFi networks, making calls, navigation and 'Googling'-via-voice commands was demonstrated. The atmosphere was cordial and the folks there were responsive to the various questions that were posed. I was told that even though I could pick up the actual Glasses there, I would have to wait for some time to get the actual detachable / modular Glass shades. This was a minor disappointment, since I had hoped to collect the entire kit in one go, especially after spending a decent chunk of money and physically making it all the way to NYC. Anyways, the pickup was right in time for my week-long trip to Vegas and the Grand Canyon, where I had hoped to do a 'stress test' of the device. I thought that with temperatures touching 118 deg F ( or 47.7 deg C ), it would provide an ideal atmosphere to test the Glasses in rough conditions.

                Construction
                • The build quality is absolutely top notch, and it also feels surprisingly light. Also, the Glasses accidentally fell down twice from a desk, but there were no scratches, or signs of damage. 

                Battery life
                • There has been justified criticism of the battery life of the Glasses, which has been stated in certain blogs as being nearly 4 hours of heavy use. However, in my usage of the Glasses I was regularly able to take more than 100 pictures and videos over the course of the day on a single charge. Additionally, I don't believe that Google intends the Glasses to be used as a continuous video consumption device ( primarily due to eye strain ), but more as a here-and-now kind of device. In that sense, Google Now is the perfect app for Google Glasses. So, even though the battery life is an area of improvement, I found it adequate for bursts of interactions throughout the course of the day.

                Photos and Videos
                • The Glasses take decent pictures and videos in good daylight ( Sample1, Sample2, Sample3, Sample4 ) and also in moderate low light conditions ( Sample5 ). The Glasses are excellent for catching fleeting moments, when you have a very limited window of opportunity. For example, we once came across Google's StreetView car, and were able to take pictures and videos of it easily with the Glasses. Glass's camera becomes nearly useless in very low light / night conditions ( as expected ). I don't expect that Google would be able to fix this anytime soon. They could either add a compact flash, or increase the exposure time for the lens. Adding flash would have an adverse effect on the already scarce battery life, and increasing the exposure time is difficult because you would then have to keep your head absolutely stable for the duration of the exposure, which is not an easy task. At the moment, the Glass software automatically applies effects ( HDR tweaks ) to the pictures to enhance them, and you don't have much control over the effects. This is an obvious area of improvement in the near future. Also, adding timers for photography would be very useful. Last but not the least, in a traditional camera you are able to frame the picture properly before taking it, but currently in Glass you don't get that option. I guess in the future, this would be another good area of improvement, with which you could get a live preview of the picture that you are about to take. The pictures and videos get backed up automatically to Google+ when a data connection is available, which is good for backup.

                Sharing
                • The 'take-a-picture-and-share-it' loop is very straightforward. It is frictionless to the extent that one needs to be really careful while sharing pictures, since it is such a compact loop. In the entire week of testing, there was only one incident where I accidentally shared a pic when I didn't intend to. The sharing function is tightly integrated with Google+, and you cannot avoid the feeling that Google is using the Glasses to push Google+ ( pushing Google+ on all possible fronts seems to be Google's policy these days anyways ).

                People's reactions
                • A bus driver at Grand Canyon as I was boarding the bus: "Is this your video monitor ?"
                • Random person 1 at Grand Canyon ( with a broad grin ): "Google Glasses, eh? Are you recording everything here ?"
                • An employee at a Grand Canyon Cafe: "Is this a magnifying Glass ?" ( Best reaction award goes to this one, in my opinion )
                • A security personnel at a gas station close to Valley of Fire: "What you got going on there ?"
                • TSA employees: No response at all.
                • Some folks looked suspicious of it, while others were in awe. 

                Limitations
                • Volume of the bone conduction speakers is too low, and they are not audible in any public place. They can be easily overwhelmed with the most minor noise.
                • The screen becomes useless in ultra-bright conditions. The only saving grace is that you can issue voice commands to still get functionality out of the device.
                • Google is still working on Glass version for people who need optical correction. Given the modular design, this shouldn't be difficult.
                • During the week of usage, there was an incident when I took a phone call with Glasses and the Glasses got 'stuck', i.e. they were repeatedly playing a sound even after I cut the call. I had to perform a hard reset of the device by pressing the power button continuously for 10 seconds. I guess this is a part of the pre-release experience. Anyways, this just happened once during the course of the week.

                Voice recognition
                • The microphones on Glass are impressively sensitive and accurate. Obviously, in strong winds you need to speak on top of the ambient noise if you want to perform voice commands. You also have the alternate option of using the touch-pad in such cases to navigate through the functionality, if you so prefer.

                Glass as a distraction?
                • After having used it for a week, I have to say that the Glasses are not as distracting as I had originally thought. One still needs to be responsible while using them, as with any other device like a smartphone.

                Software Upgrades
                • When I picked up my Glasses, they were at the XE4 firmware level. Subsequently, after reaching home, I received the XE6 update, and then a few days back I received the XE7 upgrade. In other words, I received two updates during the course of one week. Google has plans of providing software upgrades every month, so there should be new features / fixes being made available on a fairly aggressive schedule.

                Future success / failure
                • After having used it for a week, with it on my eyes for almost all of my waking hours, I am still not convinced about the absolute future success or failure of the device. It could go either way. What I am sure of is that it's an interesting concept, and Google is willing to aggressively improve it over time with feedback from #GlassExplorers, which should keep it interesting.
                Additional Resources


                Monday, April 8, 2013

                Developing Services for Google Glasses - Notes from SXSW session

                Here are some notes from Timothy Jordan's talk at SXSW with regards to developing for Google Glasses using the Glass Mirror API. The complete video can be seen here. From here on, unless specified otherwise, the term 'Glasses' in the text below refers to 'Google Glasses'.

                General Terms
                • Cards - A single screen being displayed to the user. The type of cards can be Text, Text + Images, Rich HTML and Video.
                • Bundle - A 'nested' Card. This card has a 'fold' at the upper-right corner, and can contain a collection of other Card(s).
                • Timeline - A chronological collection of Cards. Over a period of time, the various screens that you saw on the Glasses get stacked on a 'Timeline'. This seems to be akin to a browser's history. You can use the touchpad to navigate between the various cards on a Timeline. It would be interesting to know how deep this stack of cards is, and also what happens if you want to delete a particular card in the timeline.
                • Share Entities - This is used to expose a Service's functionality to other Services. If you have a background with Android, this is akin to how Applications can expose their functionality to other  Applications, via Intents.
                Glass Mirror API 
                • The slides showed Google as being the intermediary between a developer's services and the end user's device.
                • In other words, Glass Sync takes care of syncing between Google's servers and the user's device. This also means that a developer's service does not need to know whether the user's device is currently on / off, connected / disconnected etc.
                • The user needs to approve a developer's services so that they are allowed to enter Timeline cards in a user timeline. This is akin to an 'installation' process.
                • REST, OAuth2 and JSON technologies are used to push information to user's device from approved services.
                • For example if you want to post 'Hello World' on a user's screen, you would send the JSON payload {"text": "Hello world"} to Google's Mirror API end-point with the appropriate Auth Token.
                • There are two ways to create 'Bundles'. The first technique is 'Pagination' oriented. For more information, take a look at the video around the 20:42 mark. In the other technique, you create discrete cards, but provide a common bundle Id for the bundled cards, thereby creating a Bundle. 
                • To update an existing Card, you send an HTTP PUT request.
                • To delete an existing Card, you send an HTTP DELETE request.
                • To retrieve a card, you send an HTTP GET request.
                • Whenever you create a new Card, you can also specify actions/options associated with that Card. 'Reply' and 'Read aloud' are system options ( built into the framework ). You can also specify 'Custom' actions.
                • Subscriptions are a way by which the user's device communicates back to a Developer's services, via Google's servers. Subscriptions are therefore like the 'callback' mechanism for user's actions. A developer's service needs to subscribe in order to receive callbacks. Presumably you get an action's actionId in the callback, based upon which you determine the appropriate course of action to be taken. What concerns me about this is the response time for the end user since the information about a user's action needs to be sent to Google, then to the developer's servers and then presumably back along a similar route. 
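Based on the talk's description, the card operations above can be sketched as plain REST request builders. The base URL and header layout here are taken as assumptions for illustration ( consult Google's Mirror API documentation for the real details ), and no network call is made; we only construct the request descriptions:

```python
# Sketch of Mirror-API-style REST interactions: POST to create a timeline
# card, PUT to update, GET to retrieve, DELETE to remove. The endpoint
# paths and auth header shape are assumptions for illustration.
import json

API_BASE = "https://www.googleapis.com/mirror/v1"   # assumed base URL

def build_request(method, path, auth_token, payload=None):
    # OAuth2 bearer token in the Authorization header; JSON body if present.
    headers = {"Authorization": "Bearer " + auth_token}
    body = None
    if payload is not None:
        headers["Content-Type"] = "application/json"
        body = json.dumps(payload)
    return {"method": method, "url": API_BASE + path,
            "headers": headers, "body": body}

# Insert a new card into the user's timeline ( the 'Hello world' example ).
create = build_request("POST", "/timeline", "TOKEN", {"text": "Hello world"})
# Update, fetch, and delete an existing card by id.
update = build_request("PUT", "/timeline/CARD_ID", "TOKEN", {"text": "Updated"})
fetch = build_request("GET", "/timeline/CARD_ID", "TOKEN")
delete = build_request("DELETE", "/timeline/CARD_ID", "TOKEN")
```

In a real service these request descriptions would be executed by an HTTP client after completing the OAuth2 approval flow mentioned above.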
                Guidelines
                • "Design for Glass" ( and of course experience it on Glass ! ) - Design for quick interactions.
                • "Don't get in the way" for example via infrequent notifications, and by not showing Modal selections.
                • "Timely notifications" because Glass is a 'right here, right now' device.
                • "Avoid the unexpected", via transparency of functionality.
                General Notes about Glasses
                • Glasses are never directly in-front of the eyes but at the right top.
                • One of the most common way(s) of interacting with the Glasses is via speech.
                • There is a touchpad to the side of the Glasses. The various gestures which were demonstrated were swipe forward, backward, up, down and tap.
                • 'Basic' head gesture(s) can also be used to interact with Glasses ( check-out the original talk video around the 13:45 mark )
                Demos
                • Sample Apps developed by NYTimes, Gmail, Evernote and Path specifically for Glasses were demonstrated.


                  Thursday, December 6, 2012

                  So how does Android support Device Diversity ?


                  Android attempts to address device diversity by the following techniques:
                  • By providing density specific resource folders. 
                  • By providing resource folders for screens with different sizes.
• By providing an ever-updated Compatibility Library, which back-ports capabilities exposed in later versions of the OS to older versions of the OS. For example, Fragments were introduced from 3.x onwards. However, by using the Compatibility Library, you can use Fragments even in Apps targeted at Android 1.6.
                  • By unifying the Tablet and Phone experience ( Android 4.0 onwards ). 
                  • Third-party libraries also exist which can provide similar capabilities ( or in some cases even better ) to extend capabilities to previous platforms. An example of this is, ActionBarSherlock.
                  The above are some of the ways which Android developers can use to tackle device diversity for their Apps.
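For reference, the density- and size-specific folders mentioned above follow Android's standard resource qualifier naming, for example:

```
res/drawable-ldpi/    ( ~120 dpi screens )
res/drawable-mdpi/    ( ~160 dpi screens )
res/drawable-hdpi/    ( ~240 dpi screens )
res/drawable-xhdpi/   ( ~320 dpi screens )
res/layout-small/
res/layout-normal/
res/layout-large/
res/layout-xlarge/
```

At runtime, Android automatically picks the best-matching folder for the current device.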

                  .... and then Google+ entered


Google.com is one of the hottest properties on the internet, and that has enabled the search giant to get to where it is. Several years passed after Google's launch in 1998, and then in 2004, Zuck launched Facebook. Gradually, the engagement factor of Facebook became a force to reckon with.
Google’s business model was to provide adverts highly specific to the search query, and it has been a grand success for them so far. One potential pro / con of this approach was the fact that the results were independent of who was searching. However, in real life, this is not the case, because the same search query can mean multiple things to different people. Google searches, while being highly relevant, were completely devoid of this calculation, and yet it all worked out great for Google…
Then Facebook entered, and with it the user specified a myriad of personal preferences, choices, likes and dislikes. All of this information was subsequently available to Facebook, and to the third-party vendors who dealt with Facebook as well. The difference in the level of targeting that Google could provide, versus what Facebook could provide, was astounding… This is because Facebook could not only take into account what the user was searching for, but also use a plethora of information about the user’s likes, dislikes etc., and then present advertisements which were relevant not just from the search query POV, but also from the user’s POV…
I believe that Google did not anticipate an attack on their core business from this vector, and were complacent initially. Finally, Google decided that social was an important angle to the whole targeted advertisement business, and tried to enter the social space. After a few ‘failed’ social experiments ( Orkut, Buzz ), Google finally launched Google+, and also updated its privacy policy to more broadly integrate the user’s interactions across different Google properties. With this change, Google encourages users to log in and stay logged in across the different Google product offerings, which also enables Google to learn more about the logged-in user. Slowly, but surely, Google is on its way to building a solid social graph…

                  Saturday, December 1, 2012

                  Notes from Google I/O 2012 session - ” What’s New in Android ( 4.1 ) “


                  The official page for this session is here . The session is specifically about the latest iteration of the Android OS, i.e. Android 4.1 / JellyBean.

Google has now also made available the entire changelog, which is another good source of information about JellyBean.

                  Below are my notes from this session, organized by category.

                  Performance, Memory
• Using VSync + triple buffering to make overall performance much better ( or much butter - as Google likes to call it ! ).
                  • Non-editable TextViews use less memory.
                  • New Memory inspection APIs introduced for the application to be able to better inspect system memory, and then respond to it.
                  • RenderScript updates.
                  • Ability to cancel Database queries.
                  Widgets
                  • Android Widgets can now be hosted in third-party launchers.
                  • Widgets can respond to size changes. In Android 4.0, while re-sizing widgets was possible, intercepting this event was not possible. However, intercepting this change is now possible with Android 4.1, and consequently you can do stuff like update layouts etc.
                  • Widgets can now have different layouts for portrait mode, versus landscape mode.
                  Layout
                  • The new layout called as ‘GridLayout’ which was introduced in Android 4.0 for Activities / Fragments, is now also available for Widgets.
                  • GridLayouts were specifically created to solve the problem of deeply nested layouts for certain use cases in a more performant manner. 
• TextureView : essentially an enhanced SurfaceView.
                  • The above layouts are not Android 4.1 specific, but were actually introduced in Android 4.0 .
                  Animation
                  • Multiple Animation related updates to make animations easier.
                  • Activity Animation updates. Now, you can easily animate the ‘zooming out’ expansion of activities from a specific point + dimension on the screen. Android 4.1 JellyBean is replete with examples of this functionality / behavior.
                  ClipBoard
• ClipBoard can now hold styled text, i.e. not just raw text.
                  Navigation
                  • Now you can manually create synthetic task stacks ! This is huge in my opinion. Also, this update is available within Google-developed Android compatibility package, which goes all the way back to Android 1.6, so this should be available for us as well.
                  • Automatic ‘Up’ navigation support for Activities, in context of Action Bar.
                  • Still no official Action Bar support within the Support package. Romain Guy recommended using ActionBarSherlock for this.
                  Internationalization
• 18 new locales, support for right-to-left text – Arabic, Hebrew.
                  Accessibility
• Enhanced in a major way. For visually challenged folks, you can perform gestures which will gradually traverse, and describe, the various views to you, without looking at the screen. Once you choose the right area / view that you want to interact with, you perform another gesture ( double tap ) anywhere on the screen to execute the action. This way, you don’t have to figure out exactly where to tap. Also included is accessibility support to make complex Custom views more accessible. All of this exists in the support library, and therefore works all the way back to Android 1.6 !
                  Security , Permissions
• From now on, Apps that need to use external storage need to explicitly request this permission. While this is not mandatory at the moment, it will become mandatory in the future.
                  Networking, Throttling, $
• Android already supports determining whether the user is on a WiFi network, or is using a regular cellphone network. However, this is a coarse-grained approach - for example, what if the user is on a metered WiFi HotSpot network ? In this case, the user can specify which networks are metered, and you can now query this setting from within your application before performing a network-intensive operation.
                  MultiMedia
                  • Media Codec updates.
                  • Audio Latency improvements.
                  NFC
                  • Large payloads over Bluetooth, tap for pairing support.
                  Play Store
• You can respond to user comments, but this is only for ‘Top Developers’.
                  • In-App subscription support.
                  • New seller countries.
                  • Entire Team can now access Android developer console.
• Sales reports are now available.
• Android Expansion files -> the initial APK file can be up to 50 MB, and it can then be remotely augmented with expansion files up to 4 GB.
                  • Incremental APK updates are now available, automatically ! Average saving ( data ) of 66 % per download, per Google stats.
                  • Unlocked devices now available directly from Google.
                  DevTools
                  • Emulator is much faster now, to the extent that you can run games on the same with good performance.
                  • Can test hardware acceleration, via Emulator.
• Sensor and multitouch support using physical Android devices. In this case, you actually run your application on the emulator, but can feed all sensor data + multitouch events using a connected Android device for thorough testing.
                  • ‘Lint’ Tool for automated checks of your code against Google recommendations.
                  • Tracer tool for Open GL ES.
                  • Device Monitor Tool. This is basically a newer version of DDMS, with a better UI.
                  • System Trace tool.
                  • Better NDK support.
                  • Support for creating standardized types of Applications.
                  • Layout editor updates.
                  Notifications
                  • New attribute introduced – priority. Support for opportunistic notifications. Opportunistic notifications are those which do not appear when the Notification drop-down is unexpanded, but show up once the user has pulled down the notification menu.
                  • bigContentView – 256dp tall ( 4 times previous contentViewSize )
• Notification Actions: you can add up to 3 buttons within your notification, from which the user can perform an action directly. If you want to add more than three buttons, you can use Custom Layouts.
                  • Styling updates with regards to Notifications.
                  • Notification sort order is first by priority, then by time.
                  • Users can now long-tap and find out which application posted a notification. If the user is annoyed by notifications coming from an application, they can just switch off the notifications from just that application.
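The sort order described above ( first by priority, then by time ) can be sketched with a plain comparator. The Notif class below is a simplified stand-in for the framework's Notification, not the real API:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch of the ordering described above: notifications sort first by
// priority (higher first), then by post time (newer first).
public class NotifOrder {
    static class Notif {
        final String tag; final int priority; final long when;
        Notif(String tag, int priority, long when) {
            this.tag = tag; this.priority = priority; this.when = when;
        }
    }

    static final Comparator<Notif> ORDER =
        Comparator.comparingInt((Notif n) -> n.priority).reversed()
                  .thenComparing(Comparator.comparingLong((Notif n) -> n.when).reversed());

    public static void main(String[] args) {
        List<Notif> list = new ArrayList<>();
        list.add(new Notif("low-new", -1, 200));
        list.add(new Notif("high-old", 2, 100));
        list.add(new Notif("def-new", 0, 300));
        list.sort(ORDER);
        for (Notif n : list) System.out.println(n.tag);
        // prints high-old, def-new, low-new
    }
}
```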
                  Comments, questions and feedback are appreciated !

                  Enabling Android Webview to Ignore bad Certs..


                  Recently, in a small project I wanted to display a mobile optimized website inside of a WebView in a native Android App. So, I created the WebView and proceeded to load the website in it, and Voila ! it did not work ! I tried a few different settings for the webView, but each time I got a “Website not available” error.

Subsequently, I opened up the website on my desktop browser, and it worked great. Then, I opened up the website in the Android browser, and it opened up fine as well. This was fairly baffling to me. [ Later I realized that in both cases, I had set my browser to ignore bad SSL Certs ]
                  I was using a Custom webViewClient for loading the page, but was over-riding only three methods:

                  onPageStarted(WebView view, String url, Bitmap favicon)

                  onReceivedError(WebView view, int errorCode, String description, String failingUrl)
I had hoped that in case of *any* kind of error, the onReceivedError would get triggered, but it was not getting triggered. After a little head-banging, I decided to take a deeper look at which other webViewClient methods I could over-ride, and found some interesting ones.

                  onReceivedHttpAuthRequest(WebView view, HttpAuthHandler handler, String host, String realm)

                  onTooManyRedirects(WebView view, Message cancelMsg, Message continueMsg)

onReceivedSslError(WebView view, SslErrorHandler handler, SslError error)

So, I proceeded to over-ride these additional methods as well. In my subsequent test, I found that the method “onReceivedSslError” was getting triggered ! This was excellent, because now I had a clue of where the problem could be, and which direction to proceed in. I subsequently went to my desktop browser, and the stock Android browser, and made sure that I got prompted in case of a bad SSL Cert. After that change, I could see that the stock Android browser gave me a Dialog with three options, kind of like the image below.


                  The above image communicated to me that the SSL Cert was not trusted, and that all I would need to do to make it work, would be to ‘intercept’ this Dialog within the WebViewClient, and ignore the bad Cert.

                  After a little bit of digging, I found this post ( image above is sourced from the same ) from Damian Flannery’s Blog which mentions :

engine.setWebViewClient(new WebViewClient() {
    @Override
    public void onReceivedSslError(WebView view, SslErrorHandler handler, SslError error) {
        // Ignore the SSL Cert error, and continue loading the website
        handler.proceed();
    }
});

The single line of code calling handler.proceed() above will make sure that the webView will ignore bad Cert warnings, and continue to load the website. I made the above change, and it was all good after that. ( Needless to say, blindly ignoring SSL errors like this is only appropriate for testing / development scenarios. )

                  Hoping that this post could save someone’s time when faced with a similar issue in the future…


                  State of the Android....


                  A non-techie post highlighting where Android is, and where it is headed. Will be updated in due course of time as Android evolves and the use cases explode. :-)

                  Enabling Xcode like Auto-Complete in Eclipse…


                  1. Open Preferences in Eclipse. ( Command Key + , ) 

                  2. Type “Content Assist” into search box.

                  3. For each Editor that you wish to have code completion:
                  • Ensure “Enable auto activation” is checked.
                  • Put all characters into “Auto activation triggers for *” e.g.  .abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ_0123456789
                  • Set “Auto activation delay” to 500 (or whatever works for you).
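As a sanity check, the long trigger string above is just ‘.’ plus the lowercase letters, uppercase letters, underscore, and digits, so it can also be generated programmatically rather than typed by hand:

```java
// Builds the "Auto activation triggers" string used above: '.' followed by
// all lowercase letters, all uppercase letters, underscore, and the digits.
public class Triggers {
    static String buildTriggers() {
        StringBuilder sb = new StringBuilder(".");
        for (char c = 'a'; c <= 'z'; c++) sb.append(c);
        for (char c = 'A'; c <= 'Z'; c++) sb.append(c);
        sb.append('_');
        for (char c = '0'; c <= '9'; c++) sb.append(c);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(buildTriggers());
        // prints .abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ_0123456789
    }
}
```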
                  Thanks to Matt 

                  Android Resources of note


                  A short collection of Android related resources that I like to visit / read:

                  HTML5 Financial/Trading Apps


                  While HTML5 is the future, some vendors have already started to promote / create HTML5 based financial / trading apps. Here is a sneak peek at some of those. These will be updated over time as new apps become available.

                  1. Kaazing App
                  Link: http://demo.kaazing.com/forex/

                  Blog Entry:  http://blog.kaazing.com/?p=641