Data Mining for Fund Raisers
This is a repost of a Goodreads review I did a little over 4.5 years ago, for a book I read twelve (12) years ago. It seemed relevant, as the industry seems to be picking up a data-driven focus. Plus, the world is now being transformed by advances in machine learning, particularly deep learning, and the large data sets and complexity of donor actions should benefit greatly from analysis.
Data Mining for Fund Raisers: How to Use Simple Statistics to Find the Gold in Your Donor Database Even If You Hate Statistics: A Starter Guide
by Peter B. Wylie
My rating: 4 of 5 stars
My spouse, at times a development researcher of high-net-worth individuals, was given this book because she was the 'numbers' person in the office. Since my undergraduate studies focused on lab design, including statistical analysis of results, I was intrigued and decided to read it. Given my background, I found some of the material obvious, while other parts were good refreshers on thinking in terms of statistics.
Below is the synopsis I wrote at the time I read it:
Purpose of Book
- To provide a general outline of a statistically-oriented method to improve funding activities by mining your current donor database
- To provide general techniques for analyzing data, as well as provide cautions against bad techniques
How the Process Can Improve Endowment Activities
- Allows the organization to more accurately target quality prospects, either to increase participation rates, or to find major givers more inclined to donate
- Allows the organization to reduce costs, or use limited resources more effectively, e.g., phoning smaller sets of people and limiting the size of mailings while increasing donations
Outline of Method (Non-Technical)
- Export sample of donor database
- Split sample into smaller components
- Find relationships between donor features and giving
- Select the significant variables
- Develop scoring system
- Validate findings
- Test findings on limited appeals and compare results
- Assumes the donor data is extractable and randomized
- Requires export from donor database, or access via SQL
- Assumes additional software for statistics (DataDesk, SAS, SPSS)
- Requires IT staff, analytical staff, donor contacts, and management to coordinate efforts
- Requires that IT and analytical staff have adequate skills to implement
- Judges each variable both by its intrinsic value and by its inclusion in the database
View all my reviews
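The outline above can be sketched in Python with pandas. To be clear, the book itself works in tools like DataDesk, SAS, and SPSS rather than code, and every column name here is hypothetical; this is only a minimal illustration of the split/correlate/score/validate loop:

```python
import pandas as pd

# Hypothetical donor export: one row per donor, "gave" is the outcome.
donors = pd.DataFrame({
    "has_phone":   [1, 1, 0, 1, 0, 1, 1, 0],
    "is_alum":     [1, 0, 1, 1, 0, 1, 0, 0],
    "event_count": [3, 0, 2, 5, 0, 1, 4, 0],
    "gave":        [1, 0, 1, 1, 0, 1, 1, 0],
})

# Split the sample so findings can be validated on held-out donors.
train = donors.iloc[::2]
holdout = donors.iloc[1::2].copy()

# Find relationships: correlate each candidate variable with giving,
# then select the variables whose correlation clears a threshold.
correlations = train.drop(columns="gave").corrwith(train["gave"])
significant = correlations[correlations.abs() > 0.3].index

# Simple additive scoring system: one point per favorable variable.
def score(row):
    return sum(row[v] > 0 for v in significant)

holdout["score"] = holdout.apply(score, axis=1)

# Validate: higher scores should correspond to higher giving rates.
print(holdout.groupby("score")["gave"].mean())
```

In a real project the threshold, the scoring weights, and the split would all come from proper statistical testing on a much larger export, but the shape of the method is the same.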
Tips for Staying Employed as an Older Developer
A response to an article Tips for Staying Employed as an Older Developer
A bit about myself: I am an older developer, working as a developer, team lead, and project manager, writing here to add to the options for staying relevant, and for letting the world know about it.
- GitHub - I have several libraries in C# and F#, so that others can directly use and evaluate my code
- NuGet - Packaged versions of code shared on GitHub
- Blogs - lately I have been learning data-oriented languages, and I have several blogs, some focused on design patterns and algorithms, as well as one focused on data analysis using R, Python, and F#
- Websites - I have several sites, along with blogs, all accessible from a primary site, James Igoe. That site links to my other sites and blogs, including an older site, predating GitHub, where I share code as downloads in my core languages (VBA, C#, and SQL), along with tools for programming interviews and cheat sheets.
- Reposts - career and tech-related articles shared on LinkedIn, GooglePlus (communities), Twitter, and Facebook (page)
- Training - yes, like others I am always learning, but I also share the material I work through and my opinion of it, writing book reviews and sharing my thoughts on courses from Pluralsight.
Value-at-Risk (VaR) Calculator Class in Python
As part of my self-development, I wanted to rework a script (scripts are typically one-offs) into a reusable component, although existing packages for VaR are available. As such, this is currently a work in progress. This code is a Python-based class for VaR calculations; for those unfamiliar with the term, VaR is an acronym for value at risk, the worst-case loss in a period for a particular probability. It reworks my prior scripted VaR calculations, implementing various high-level good practices, e.g., hiding/encapsulation, don't-repeat-yourself (DRY), and dependency injection.
- Requires a data frame of stock returns, factor returns, and stock weights
- Calculates and returns a single VaR number for different variance types
- Calculates and returns an array of VaR values by confidence level
- Calculates and plots an array of VaR values by confidence level
Still to do:
Note: Data to validate this class is available from my Google Drive Public folder
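To give a feel for the design, here is a minimal sketch of such a class. The names are hypothetical (my actual class differs), it assumes normally distributed returns (parametric, variance-covariance VaR), and it covers only total variance, not the separate variance types or plotting:

```python
import numpy as np
from statistics import NormalDist

class VaRCalculator:
    """Parametric (variance-covariance) VaR for a weighted stock portfolio.

    Sketch only: takes per-period stock returns and portfolio weights
    (injected via the constructor), and reports worst-case loss at a
    given confidence level.
    """

    def __init__(self, returns, weights):
        self._returns = np.asarray(returns, dtype=float)  # (periods, stocks)
        self._weights = np.asarray(weights, dtype=float)  # (stocks,)

    def _portfolio_sigma(self):
        # Portfolio standard deviation from the covariance of stock returns.
        cov = np.cov(self._returns, rowvar=False)
        return float(np.sqrt(self._weights @ cov @ self._weights))

    def var(self, confidence=0.95):
        # Single VaR number: normal z-score at the confidence level times sigma.
        z = NormalDist().inv_cdf(confidence)
        return z * self._portfolio_sigma()

    def var_by_confidence(self, levels=(0.90, 0.95, 0.99)):
        # Array of VaR values, one per confidence level.
        return np.array([self.var(c) for c in levels])

# Usage with toy returns for two stocks, equally weighted.
rng = np.random.default_rng(42)
returns = rng.normal(0.0, 0.01, size=(250, 2))
calc = VaRCalculator(returns, weights=[0.5, 0.5])
print(calc.var(0.95))
print(calc.var_by_confidence())
```

Hiding the covariance calculation behind `_portfolio_sigma` is the encapsulation mentioned above, and having every public method route through `var` is the DRY part.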
Calculating Value at Risk (VaR) with Python or R
The modules linked below are based on a Pluralsight course, Understanding and Applying Financial Risk Modeling Techniques. While the code itself is nearly verbatim, this is mostly for my own development, working through the peculiarities of Value at Risk (VaR) in both R and Python and adding commentary as needed.
The general outline of this process is as follows:
Load and clean data
Calculate historical variance
Calculate systemic, idiosyncratic, and total variance
Develop a range of stress variants, e.g. scenario-based possibilities
Calculate VaR as the worst case loss in a period for a particular probability
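The steps above can be sketched in Python with a toy single-factor model. This is my own illustration, not the course's code: the data is simulated, the names are mine, and normality is assumed throughout:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

# Load and clean data: here, simulated daily factor (market) and stock returns.
factor = rng.normal(0.0, 0.01, 250)
stock = 1.2 * factor + rng.normal(0.0, 0.005, 250)

# Historical (total) variance of the stock's returns.
total_var = stock.var(ddof=1)

# Split into systemic and idiosyncratic variance via a one-factor regression:
# systemic = beta^2 * var(factor); idiosyncratic = the remainder.
beta = np.cov(stock, factor, ddof=1)[0, 1] / factor.var(ddof=1)
systemic_var = beta**2 * factor.var(ddof=1)
idiosyncratic_var = total_var - systemic_var

# Stress variant: a scenario where factor volatility doubles.
stressed_var = (2.0 * np.sqrt(systemic_var))**2 + idiosyncratic_var

# VaR: worst-case loss in a period for a particular probability (99% here).
z = NormalDist().inv_cdf(0.99)
var_99 = z * np.sqrt(total_var)
stressed_var_99 = z * np.sqrt(stressed_var)
print(var_99, stressed_var_99)
```

The stressed scenario leaves the idiosyncratic piece alone and scales only the systemic piece, which is why decomposing the variance first is worth the trouble.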
Review: The Systems View of Life: A Unifying Vision
My rating: 5 of 5 stars
An excellent, incredibly insightful, and informative book, somewhat marred by the tedium of the authors' rehashing of ideas from organizations working for change. For most of the book, the writers masterfully tie together concepts in systems theory, mathematics, consciousness, the environment, society, and biology, and for that it is a brilliant read.
The Systems View of Life: A Unifying Vision
by Fritjof Capra
View all my reviews