Agile FAQs

Managed Chaos
Naresh Jain's Random Thoughts on Software Development and Adventure Sports

Why did the Stats dip in Agile India 2015 Conference?

August 8th, 2015

Agile India 2014 Conference was happy to host 1236 attendees from 28 different countries. The attendees belonged to 226 different companies and played 342 different roles. More details

However, at the Agile India 2015 Conference we hosted 817 attendees from 26 different countries. The attendees belonged to 165 different companies and played 270 different roles. More details

Also, there was a corresponding drop in the number of sponsors: 14 in 2014 as opposed to 11 in 2015. Many people ask us why the numbers dipped. That’s a fair question. Following are the reasons we think the numbers dipped:

  1. We moved from Four 1-day mini-conferences to Two 2-day mini-conferences. (So naturally the count will dip. In 2016, we are back to Five 1-day mini-conferences.)
  2. In 2015, we shrunk the program team size to 9 members from 29 members in 2014. Reason: we wanted to experiment and see what happens if we don’t decide the team upfront, but add members to the team only based on their contributions (esp. via the Submission System.) I guess that did not work out all that well. In 2016, we are back to a 26 member team that is decided upfront.
  3. Overall, the planning for the 2015 conference was delayed. We only started actively working on the conference in Sep 2014, as opposed to July 2013 for the 2014 conference. (For 2016, we started work in June 2015 itself.)
  4. Part of the reason for the delay was that we were busy planning the Agile Pune 2014 Conference. Planning 2 fairly large international conferences on the same topic, 4 months apart, can lead to them competing with each other. Each year we organise a bunch of smaller, regional conferences; however, with the Pune conference we got a bit ambitious. A good lesson learned.
  5. The themes selected for the 2015 conference were a repeat of the most popular themes from the 2014 conference. In hindsight, that was a bad idea. Participants and companies want something new every year. (For 2016, we have 5 brand new, relevant and trendy themes: Research Camp, Lean Startup, Enterprise Agile, Continuous Delivery & DevOps, and Agile in the Trenches.)

I can go on…but you get the idea.

This does not mean we will stop experimenting. We’ve been successfully running this conference for 11 years, and every year we try something new, something different. That’s what keeps the excitement and enthusiasm alive for us (a group of volunteers with regular day jobs).

A Bird in the Hand is Worth Two in the Bush

August 1st, 2015

In the software world, we call this speculative generality, YAGNI or over-engineering.

IMHO we should not be afraid to throw parts of our system out every 6-8 months and rebuild them. Speed and simplicity trump everything! Also, in 6-8 months, new technology, your improved experience, better clarity, etc. will help you design a better solution than you can design today.

Just to be clear, I’m not suggesting that we do a sloppy job and rewrite the system because of sloppiness. I’m recommending that we solve today’s problem in the simplest, cleanest and most effective/efficient way. Let’s not pretend to know the future and build things based on that, because none of us knows what the future looks like. The best we can do is guess, and we humans are not very good at it.

Agile India 2016 – Call for Proposals

July 29th, 2015

Agile India volunteers have started working on Agile India 2016 Conference. We are planning to host the conference at the same venue (Hotel Chancery Pavilion) in Bangalore from 14th – 21st Mar 2016 (8 Days.)

We are now open for proposals for the following conference themes (with their theme chairs):

  • Research Camp (March 15th) – Jyothi Rangaiah and Ashay Saxena
  • Lean Startup (March 16th) – Nitin and Tathagat (ad interim)
  • Enterprise Agile (March 17th) – Evan Leybourn and Ravi Kumar
  • DevOps and Continuous Delivery (March 18th) – Joel Tosi and S Sivaguru
  • Agile in the Trenches (March 19th) – Ellen Grove and Leena S N

More details:

The conference will host 3 parallel tracks. The CFP Early Bird Submissions will close on Sep 10th.

Please submit your proposals at

Speaker Compensation:

Please spread the word:
Twitter: #AgileIndia2016 or @agileindia

Discount Code: Open Web & jQuery Conference – 22-25th July Bangalore

July 6th, 2015

Open Web and jQuery Conference

The jQuery Foundation is making its first trip to Bangalore, bringing together experts from across the field of front-end development to get you up to speed on the latest open web technologies. Get the inside scoop on front-end development, code architecture and organization, design and implementation practices, tooling and workflow improvements, and emerging browser technologies.

We hope that you can use this opportunity to share ideas, socialize, and work together on advancing the present and future success of the front-end ecosystem.

More details:

Online Registrations

The first 25 people to register can avail a special 15% discount. Use discount code – Managed@Chao$


jQuery Speakers

  • Dave Methvin – jQuery Core Lead | President of jQuery Board
  • Kris Borchers – Executive Director of jQuery Board
  • Scott González – jQuery UI Lead
  • Bodil Stokke – Functional Programming Hipster
  • Darcy Clarke – Co-Founder, Themify
  • Eric Schoffstall – Creator, Gulp
  • John K Paul – Organizer, NYC HTML5
  • Alexis Abril – Committer, CanJS
  • and 21 more speakers


1. Pre-Conference Workshops – Wednesday, July 22

  • Optimizing and Debugging Web Sites by Dave Methvin
  • Revolutionizing your CSS! by Darcy Clarke
  • Contributing to the jQuery Foundation by Kris Borchers
  • JavaScript: The Misunderstood Parts by Alexis Abril

2. Open Web Conf – Thursday, July 23
Talks on Functional Reactive Programming, ES6, Escher.jl, CanJS, Ionic Framework, Kendo UI, Arduino, WebRTC and The Future of Video.

3. jQuery Conf – Fri, 24th & Sat, 25th July
Talks on The jQuery Foundation, Grunt, AngularJS, TDD in JS, Securing jQuery Code, Performance beyond Page Load, Responsive Web, jQuery Gotchas, Functional Reactive Programming, RxJS, ReactJS, Om, Memory Leaks, D3 and WebRTC.

4. Hackathon hosted by Joomla project – Friday 24th 2:00 PM – Sat 25th 2:00 PM
Details will be published shortly…


Big thanks to Freshdesk for supporting this conference as a Diamond Sponsor.


Hotel Chancery Pavilion, Residency Road, Bangalore



OS X Yosemite 10.10 + cURL 7.37.1 – CA Certificate Issue & curl_ssl_verifypeer Flag

June 28th, 2015

If you are using Opauth-Twitter and suddenly find that Twitter OAuth is failing on OS X Yosemite, it could be because of the CA certificate issue.

In OS X Yosemite 10.10, they switched cURL’s version from 7.30.0 to 7.37.1 [curl 7.37.1 (x86_64-apple-darwin14.0) libcurl/7.37.1 SecureTransport zlib/1.2.5] and since then cURL always tries to verify the SSL certificate of the remote server.

In previous versions, you could set curl_ssl_verifypeer to false and it would skip the verification. However, from 7.37 onwards, if you set curl_ssl_verifypeer to false, it complains: “SSL: CA certificate set, but certificate verification is disabled”.

Prior to version 0.60, tmhOAuth did not come bundled with the CA certificate and we used to get the following error:

SSL: can’t load CA certificate file <path>/vendor/opauth/Twitter/Vendor/tmhOAuth/cacert.pem

You can get the latest cacert.pem from here and save it under /Vendor/tmhOAuth/cacert.pem. (The latest version of tmhOAuth already has this in its repo.)

Then we need to set curl_ssl_verifypeer to true in the $defaults (optional parameters) in TwitterStrategy.php on line 48.
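The combination can be sketched with plain PHP cURL options (a minimal sketch only: the URL and certificate path below are placeholders, and in practice tmhOAuth/Opauth set these options for you internally):

```php
<?php
// Minimal sketch: keep peer verification ON and point cURL at the CA bundle.
// With cURL >= 7.37, setting a CA bundle while disabling verification
// triggers the "certificate verification is disabled" complaint instead.
$ch = curl_init('https://api.twitter.com/1.1/account/verify_credentials.json');

curl_setopt($ch, CURLOPT_CAINFO, __DIR__ . '/cacert.pem'); // downloaded CA bundle
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);            // verify the remote cert
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);               // verify the hostname too
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

$response = curl_exec($ch);
if ($response === false) {
    echo 'cURL error: ' . curl_error($ch);
}
curl_close($ch);
```

This mirrors what setting curl_ssl_verifypeer to true in $defaults achieves: verification stays enabled, and the bundled cacert.pem gives cURL a CA chain it can actually validate against.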

P.S.: Turning off curl_ssl_verifypeer is actually a bad security move. It can make your server vulnerable to man-in-the-middle attacks.

Agile India 2016 – Call for Program Committee

May 1st, 2015

Agile India 2016 Conf is Asia’s Largest & Premier Conference on Agile, Lean, Scrum, eXtreme Programming, Lean-Startup, Kanban, Continuous Delivery, DevOps, Patterns and more…

This time we are hosting a mega eight-day conference, starting on March 14th (Monday), where experts and practitioners from around the world will share their experience. The number of parallel tracks will be decided based on the quality of proposals we receive; we are hoping the conference will host at least 3 parallel tracks.

Overall Agenda (tentative):

  • Pre-Conference Workshop – 14th and 15th March (10:00 AM – 6:00 PM)
  • Research Camp – 15th March (10:00 AM – 5:00 PM)
    ** Research Paper Presentation
    ** Open Space
    ** Brainstorming on improving Industry-Academia Collaboration
  • Executive Leadership Conclave – 15th March (5:00 PM – 10:00 PM)
    ** Break-up
    *** Keynote – 60 mins
    *** Fishbowl – 90 mins
    *** Group Activity on Future Direction – 90 mins
    *** Cocktail Dinner Party
  • Lean Startup – 16th March (9:00 AM – 6:30 PM)
    ** Topics
    *** Customer Development (Product Discovery)
    *** Crafting MVPs & Safe-Fail Experimentation
    *** Design Thinking
    *** Lean UX
    *** Lean Delivery
    *** Actionable Metrics
    *** 90 mins Hands-On Workshops
  • Enterprise Agile – 17th March (9:00 AM – 6:30 PM)
    ** Topics
    *** Scaling Agile – Frameworks
    *** People (career) & Performance Appraisals
    *** Tools – Portfolio Management, Distributed Teams
    *** 90 mins Hands-On Workshops
  • Continuous Delivery & DevOps – 18th March (9:00 AM – 6:30 PM)
    ** Topics
    ** Culture Transformation
    ** Software Craftsmanship
    *** TDD/BDD, CI, Refactoring
    *** Evolutionary Design
    *** Test Pyramid
    *** Legacy Code
    ** Cross-functional Team Collaboration
    ** DevOps Tools – Build, Deployment, Monitoring
    ** 90 mins Hands-On Workshops
  • Agile in the Trenches – 19th March (9:00 AM – 6:30 PM)
    ** Topics
    *** Agile Challenges (20 mins experience reports only)
    *** Abuse of Agile (20 mins experience reports only)
    *** Agile Hacks – How did you tweak std. agile practices to work in your context (20 mins experience reports only)
    *** Agile Tools Ecosystem
    **** Visibility Tools – Project Management, Information Radiators
    **** Feedback Tools – Code Quality, CI, Deployment, A/B Testing
    *** 90 mins Hands-On Workshops
  • Post-Conference Workshop – 20th and 21st March (10:00 AM – 6:00 PM)

We need your help to pull this off.

Roles, Responsibilities and Compensation for Program Committee Members:

–> Over the next 10 months, you would be expected to dedicate 30 mins every day (including weekends) to fulfil your role. Please apply only if you are sure you can commit to this.

DUE DATE: May 15th.

Apply here:

What is Agile’s Biggest Shortcoming?

April 11th, 2015

I’m surprised when people think Agile is perfect and that, if there are any shortcomings, it’s not a problem with Agile but rather the person’s/team’s/org’s understanding or implementation. Somewhere along the way, the aspect of “We are uncovering better ways of developing software” was lost, and Agile became this static, rule-based, prescriptive, dogmatic, cargo-cult thing.

IMHO Agile has made a significant difference to the software industry (some of it as a placebo effect); however, it has some serious limitations when you try to apply it beyond simple CRUD-based applications:

  • Agile works well in linear or organised complexity domains where the problem is fairly well understood (static) and we need to find/evolve the solution iteratively and incrementally. But in domains, where:
    • the problem itself is unknown or constantly shifting,
    • the problem has a dozen or so variables that interact non-linearly. For ex:
      • in life sciences, where we’re trying to understand ageing/growth
      • in anti-terrorism where we have to deal with a crisis situation
      • when simulating chaotic systems like Indian traffic system
      • trying to predict outcomes in systems with distributed intelligence

applying Agile values, principles and practices is not the best approach. We often find ourselves lacking the right kind of thought process and tools to manage such projects.

  • Even though the Agile luminaries claim that Agile treats software development as a Complex Adaptive System, they actually apply techniques that work in a Complicated Domain.
    • For example, given a problem, we analyse it, figure out a best-bet solution (a set of practices), apply the solution, see what happens, do a retrospective and tweak the solution (inspect and adapt). This is how you work in a complicated domain. In a complex adaptive domain, we try a few independent, safe-fail experiments to solve the problem, but most importantly we run all those experiments in parallel (a set-based development approach), so we can really amplify good patterns and dampen bad patterns. We manage the emergence of beneficial patterns with attractors within boundaries. It’s like running 5 parallel A/B tests and then coming up with a solution.
  • Agile folks seem to claim that distributed development is hard and that you should always prefer collocation. But what about the thousands of successful open source projects built by people who’ve never met each other? We seem to be missing something here. The open source project model seems to be far better at motivating people by giving them autonomy, mastery and a sense of purpose. Most Agile projects are not able to match this.
  • Today, velocity and a bunch of other vanity metrics are killing agility. There seems to be so much focus on output and very little focus on outcome and learning. Agile has very little to offer in the space of customer development, business model validation, user experience and other important aspects required for a successful product launch, which is what the Lean-Startup movement is trying to address. This is clearly a limitation of Agile methods.

What’s your take?

Done with Definition of Done or Definition of Done Considered Harmful

January 26th, 2015

TL;DR: Definition of Done (DoD) is a checklist-driven project management practice that drives compliance and contract negotiation rather than collaboration and ownership. It’s very easy for teams to go down rat-holes and start to gold-plate crap in the name of DoD. It encourages a downstream, services-thinking mindset rather than a product engineering mindset (very output-centric, rather than outcome/impact-focused). It also smells of a lack of maturity and trust within the team. Bottom line: it’s the wrong tool in the wrong people’s hands.

The Scrum Guide™ describes DoD as a tool for bringing transparency to the work a Scrum Team is performing. It is related more to the quality of a product, rather than its functionality. The DoD is usually a clear and concise list of requirements that a software Increment must adhere to for the team to call it complete.

They recommend that having a clear DoD helps Scrum Teams to:

  • Work together more collaboratively, increases transparency, and ultimately results in the development of consistently higher quality software.
    • Clarifies the responsibilities of story authors and implementors.
  • Enables the Development Team to know how much work to select for a given Sprint.
    • Encourages people to be clear about the scope of work.
  • Enable transparency within the Scrum Team and helps to baseline progress on work items
    • Helps visualize done on posters and/or electronic tools.
    • Aids in tracking how many stories are done or unfinished.
  • Expose work items that need attention
  • Determine when an Increment is ready for release

Also according to them, DoD is not changed during a Sprint, but should change periodically between Sprints to reflect improvements the Development Team has made in its processes and capabilities to deliver software.

According to the LeSS website, DoD is an agreed list of criteria that the software will meet for each Product Backlog Item. Achieving this level of completeness requires the Team to perform a list of tasks. When all tasks are completed, the item is done. Don’t confuse DoD with acceptance criteria, which are specific conditions an individual item has to fulfil to be accepted. DoD applies uniformly to all Product Backlog Items.


If you search online, you’ll find sample DoD for user stories more or less like this:

  1. Short Spec created
  2. Implemented/Unit Tests created
  3. Acceptance Tests created
  4. Code completed
  5. Unit tests run
  6. Code peer-reviewed or paired
  7. Code checked in
  8. Documentation updated
  9. 100% Acceptance tests passed
  10. Product Owner demo passed
  11. Known bugs fixed

Another example:

  1. Upgrade verified while keeping all user data intact.
  2. Potentially releasable build available for download
  3. Summary of changes updated to include newly implemented features
  4. Inactive/unimplemented features hidden or greyed out (not executable)
  5. Unit tests written and green
  6. Source code committed on server
  7. Jenkins built version and all tests green
  8. Code review completed (or pair-programmed)
  9. How to Demo verified before presentation to Product Owner
  10. Ok from Product Owner

Do you see the problem with DoD? If not, read on:


  • Checklist Driven: It feels like a hangover from checklist-driven project management practices. It treats team members as dumb checklist bots rather than as smart individuals who can work collaboratively to achieve a common goal.
  • Compliance OVER Ownership: It drives compliance rather than ownership and entrepreneurship (making smart, informed, contextual decisions.)
  • Wrong Focus: If you keep it simple, it sounds too basic or even lame to be written down. If you really focus on it, it feels very heavy handed and soaked in progress-talk. It seems like the problem DoD is trying to solve is lack of maturity and/or trust within a team. And if that’s your problem, then DoD is the wrong focus. For example, certain teams are not able to take end-to-end ownership of a feature. So instead of putting check-points (in the name of DoD) at each team’s level and being happy about some work being accomplished by each team, we should break down the barriers and enable the team to take end-to-end responsibility.
  • Contract Negotiation OVER Collaboration: We believe in collaboration over contract negotiation. However, DoD feels more like a contract. Teams waste a lot of time arguing about what makes a good DoD. You’ll often find teams gold-plating crap and then debating with the PO about why the story should be accepted. (Thanks to Alistair Cockburn for highlighting this point.)
  • Output Centric: DoD is a very output-centric thought process, instead of focusing on end-to-end value delivery (the outcome/impact of what the team is working on). It creates an illusion of “good progress” while you could be driving off a cliff. It mismanages risk by delaying real validation from end users. We seem to focus more on software creators (product owners, developers, etc.) than on software users. The emphasis is more on improving the process (e.g. increasing story throughput) than on improving the product. For example, it helps with tracking done work rather than discovering and validating users’ needs. DoD is more concerned about “doing” than “learning”. (Thanks to Joshua Kerievsky for highlighting this point.)
  • Lacks Product Engineering Mindset: It encourages a downstream services-thinking mindset rather than a product engineering mindset. Unlike in the services business, in product engineering you are never done, and the cycle does not stop at promoting code to a higher environment (staging). Studying whether the feature you just deployed has a real impact on the user community is more important than checking off a task from your sprint backlog.

What should we do instead?

Just get rid of DoD. Get the teams to collaborate with the Product Management team (and the user community, if possible) to really understand the real needs and the least the team needs to do to solve the problem. I’ve coached several teams this way, and we’ve really seen them come up with creative ways to meet users’ needs and take ownership of end-to-end value delivery instead of gold-plating crap.

Self-Organised vs. Self-Managed vs. Self-Directed…What’s the Difference?

October 29th, 2014

Self-organised, self-managed and self-directed…do they mean the same thing or are they actually different concepts, where one might be more desirable over the other?

In the context of an “agile” team, people seem to use these terms interchangeably. However, it’s important to note that there are subtle yet worthwhile distinctions between them.

Self-Managed Team

A group of people working together in their own ways, toward a common goal, which is defined outside the team.

For example – the CEO of a company decides to launch a new product to address the needs of a specific target market. An initial team is assembled with a budget and high-level timelines. This team decides how it wants to operate within the given budget. The team does its own work scheduling, training, rewards and recognition, etc. Members typically do a 360 review and rate other team members for salary appraisals. The team also manages itself and its stakeholders, collectively playing the manager’s role.

Self-Directed Team

A group of people working together in their own ways, toward a common goal, which the team defines.

Usually, the team comes together for a common cause. In addition to the characteristics highlighted under the self-managed teams, a self-directed team also handles the actual compensation, discipline, and acts as a profit centre by defining its own future. In some sense, you can think of open-source projects resembling these characteristics. There is a big element of self-selection and built-in synergy.

Self-managed and self-directed teams have noticeable differences in terms of autonomy and, because of it, how they actually operate. Listed below are attributes to consider when deciding how to structure the teams in your organisation:

  • Goals – Self-managed: receives goals from leadership and determines how to accomplish them. Self-directed: determines its own goals and formulates a strategy to accomplish them.
  • Commitment/Motivation – Self-managed: requires frequent, open communication from leadership on company goals and objectives to build employee commitment and increase morale. Self-directed: the team itself creates an environment of high innovation, commitment and motivation in team members.
  • Required Skills – Self-managed: conducting effective meetings, problem solving, project planning and team skills. Self-directed: decision making, entrepreneurship, conflict resolution and problem-solving techniques.
  • Supervision – Self-managed: requires little supervision to track the team’s progress and direction. Self-directed: prefers to work without supervision.
  • Customer Satisfaction – Self-managed: can increase customer satisfaction through better response time in getting work done and resolving important customer problems. Self-directed: can delight customers by focusing on innovation, problem solving and reduced cycle time (local, informed decision making).
  • Time to Get the Team Up & Running – Self-managed: relatively fast to get the team working together if the goal is given to them; once started, they might face challenges due to lack of focus and motivation, but at least they get started quickly. Self-directed: forming a team of high-calibre people who can quickly converge on a common goal is hard, and it can be expensive and time-consuming to keep the team together and resolve conflicts; but once the team gels and gets started, its performance is unmatchable.
  • Supporting Functions – Self-managed: requires some help from supporting teams like Learning and Development, Human Resources, etc. Self-directed: pretty much self-contained; can manage with very little external support.
  • Executive Leadership Involvement – Self-managed: requires leaders to guide, motivate and track the team’s direction. Self-directed: requires a system that provides two-way communication of corporate strategy between leaders and their teams.

Hopefully, this highlights the difference between self-managed and self-directed. What about self-organised?

First let’s understand what self-organisation, as a phenomenon means.


Self-organisation is a process where some form of global order or coordination arises out of the local interactions between the components of an initially disordered system. This process is spontaneous: it is not directed or controlled by any agent or subsystem inside or outside of the system; however, the laws followed by the process and its initial conditions may have been chosen or caused by an agent. It is often triggered by random fluctuations that are amplified by positive/negative feedback. The resulting organisation is wholly decentralised or distributed over all the components of the system. As such it is typically very robust and able to survive and self-repair substantial damage or perturbations.

Self-organisation occurs in a variety of physical, chemical, biological, social and cognitive systems. Common examples are crystallisation, the emergence of convection patterns in a liquid heated from below, chemical oscillators, the invisible hand of the market, swarming in groups of animals, and the way neural networks learn to recognise complex patterns. Self-organisation is also relevant in chemistry, where it has often been taken as being synonymous with self-assembly.

[Image: Auklet flock, Shumagins, 1986]

Sometimes the notion of self-organisation is conflated with that of the related concept of emergence. Properly defined, however, there may be instances of self-organisation without emergence and emergence without self-organisation, and it is clear from the literature that the phenomena are not the same. The link between emergence and self-organisation remains an active research question.

Self-organisation usually relies on three basic ingredients:

  1. Strong dynamical non-linearity, often though not necessarily involving positive and negative feedback
  2. Balance of exploitation and exploration
  3. Multiple interactions

Self-organisation in biology

Birds flocking, an example of self-organisation in biology. According to Scott Camazine – “In biological systems self-organisation is a process in which pattern at the global level of a system emerges solely from numerous interactions among the lower-level components of the system. Moreover, the rules specifying interactions among the system’s components are executed using only local information, without reference to the global pattern.”

Real Question

Now let’s look at what is a self-organised team? Actually, the real question to ask is, what aspects of the team do they self-organise?

IMHO both self-managed and self-directed teams use self-organisation to achieve their objectives. Self-managed teams mostly self-organise to achieve their tasks, while self-directed teams also use self-organisation to form the team itself. It almost feels like self-managed/self-directed is one dimension (abstraction), while self-organised is a slightly different dimension (implementation). While it feels like you cannot be self-managed or self-directed without self-organisation, I’m not 100% sure.

Program Committee’s Expectation from a Talk Proposal when Selecting It

October 12th, 2014

Over recent conferences, I’ve had several people ask me the following:

I would like to better understand the expectations from the organising committee on the talk proposals. In particular, I would like your feedback on my talk submission so that I can work on improving the same.

I think this is a very valid question:

What is the selection criteria for the talks?

I’ve been organising conferences for a decade now and following is my perspective:

In terms of the overarching themes or values, we look at the following during selection:

  • Diversity – As a conference, we want to be more inclusive (different approaches, different programming languages, gender, countries, background, etc.)
  • Balance – We want to strike a good balance between different types of presentations (expert talks, experience reports, tutorials, workshops, etc.) and different types of experience the speakers bring to the conference.
  • Equality – We encourage more student and women speakers. We won’t select a weak proposal just because it came from a student or a woman. But given we have to pick 1 out of 2 equal proposals, we’ll pick the one proposed by a student or a woman.
  • Practicality – People come to a conference to learn, network, have an experience and leave motivated. Proposals which directly help this are always preferred. While a little theory is good, if the proposal lacks practical application, it does not really help the participants. Also, people learn more by doing than by listening; if a proposal has an element of “learn by doing”, it wins over other proposals. Take people on a learning journey.
  • Opportunity – While we want to ensure the conference has at least 2/3 rock-solid speakers, we also want to give an opportunity to new speakers who have real potential.
  • Originality – Original ideas win hands-down over copied ones. People always prefer listening to an idea from its creator rather than from a second or third person. However, you might have taken an idea and tweaked it in your context, gaining an insight by doing so. We certainly want to hear your first-hand experience, even if you were not the creator of the original idea. We are looking for thought leadership.
  • Radical Ideas – We really respect people, who want to push the boundaries and challenge the status quo. We have a soft-corner for unconventional ideas and will try our best to support them and bring awareness to their work.
  • Demand – Votes on a proposal and buzz on social media give us an idea of how many people are really interested in the topic. (We fully understand votes can be gamed, but we have a system that can eliminate some bogus votes and use different types of patterns to give us a decent sense of the real demand.)

Once the proposal fits into our value system, here are some basic/obvious stuff we expect when we look at the proposal in the submission system:

  • Does the Title match the Abstract?
  • Under the Outline/Structure of the Session, will the time break-up for each sub-topic do justice to the topic?
  • Is there a logical sequencing/progression of the topics?
  • Has the speaker selected the right session type and duration for the topic? For example, a 60-min talk might be very boring.
  • Has the speaker selected the best matching Theme/Topic/Category for the proposal?
  • Is the Target Audience specific and correct? Also does it match with the Session Level?
  • Is the Learning Outcome clearly articulated? Ideally 3-5 points, one on each line.
  • Based on the Outline/Structure, will the speaker be able to achieve the Learning Outcomes?
  • Based on the presentation link, does the speaker have good quality content and good way to present it?
  • Based on the video link, does the speaker have good presentation (edutainment) skills? Will the speaker be able to hold the attention of a large audience?
  • Based on the additional links, does the speaker have subject matter expertise and thought leadership on the proposed topic?
  • Are the Labels/Tags meaningful?

A proposal stands the best chance of being selected if it’s unique, fully fleshed out and ready to go. Speakers, please ensure you provide links to your:

  • previous conference or user group presentations
  • open source project contributions
  • slides & videos of (present/past) presentations (other conferences or local user group or in-office)
  • blog posts or articles on this topic
  • and so on…

When selecting a proposal, we pay attention not only to the quality of the proposal but also to the quality of the speaker, i.e. whether the speaker will be able to effectively present/share their knowledge with others. Hence past speaking experience (videos & slides) is extremely important. If you don’t have a video from a past conference presentation, that’s fine. Try to set up a Google Hangout at one of your upcoming local user group meetings or internal office meetings where you are presenting, and share that link. This will give the committee a feel for your presentation skills and subject matter expertise.

While this might look very demanding, it is extremely important to ensure we put together a top-notch program.

    Licensed under
Creative Commons License