Category Archives: academic

Student success

The call came in at midnight. Medical emergencies often seem to be middle-of-the-night events. She’d been transported from the residence hall to the emergency room. Chest pain. Difficulty breathing. Abdominal pain. Lower back pain. Severe pain. Both sides. I knew this was her fourth trip in as many days. Tests were coming back negative or inconclusive while her condition deteriorated. As if a child of mine were in distress, I was headed out the door.

This time the hospital admitted her and, with one particular test providing a cause, put her on the appropriate medical treatment.

Word was passed along to her instructors that she had been hospitalized and was undergoing treatment. Two faculty members asked about her condition and asked to be kept informed as to how they could help. One of the two also asked whether the student was taking visitors – the faculty member wanted to stop by. Their immediate reaction was concern for the care and safety of the student. Beyond her immediate condition, they also expressed a desire to help her succeed in their courses when she returned.

The third faculty member said only, “She missed a quiz and test already, she is likely to fail my course.” The faculty member did not ask about her as a person, expressed no concern over the distress the young woman was in. Just stated that she was headed for failure in their class. Cold. That was the only word that came to mind. Cold. No words of comfort. No assurance that the faculty member stood by ready to help the young woman once she had recovered. No commitment to her success as a student. Heck, no sign that the faculty member considered her a human being suffering from pain. No empathy at all.

I suggested as much, that right now her family and those of us who know her are a tad more concerned that she get well and recover than whether or not she took some particular quiz.

A commitment to student success can be an empty slogan. A trite, overused cliché. Or one can ignore the chaff that now attends the term student success and, as teachers have done for millennia, take a supportive approach to the individual student as a person. Each student is a bundle of hopes and dreams, some parents’ loved and adored child, someone who, when in distress far from home, could use some empathy and care from those entrusted with their education.

I once had the privilege of attending a talk given by Paulo Freire, who was a Brazilian educator and philosopher. Prior to hearing him talk I had tackled some of his writings, but I found difficulty understanding the philosophical underpinnings of his writing. At the talk Paulo was asked, “In a word, what is education?” Paulo paused and then said, “Love. Education is love.” That I could understand.


Of learning and loss

Forces driving the financing of education, especially higher education, increasingly want to see that the education delivered prepares the student for the world of the workplace. Measures such as the number of graduates who succeed in obtaining employment in their field of study are used to gauge the success of a program. How often has someone said, “Education is the key to success,” with the implicit meaning that the value of an education is what one does with that education beyond graduation?



In a higher education system increasingly driven by the value of education as a path to employment, what is the value of that education to one who will never become employed? One who is tragically lost to us. Rousseau in Emile first introduced me to the idea that an education should be of value to a child even if that child does not reach adulthood. And value for children is in having fun, enjoying life. An education should be fun. Enjoyable. An experience that is sufficiently wonderful that even if the child were to know that they will not live out the fullness of the years, the child would want to be in school. In elementary school. In high school. In college.

An education should be of value to a child in the here and now, an enriching and exciting experience, an adventure filled with wondrous wonders. Perhaps every day will not be exciting, but on net the experience should be positive.

Higher education at present is especially enamored of student learning outcomes and measuring learning. Learning is measured, assessed, analyzed, reported, and used to attempt to improve learning the next term. Few instructors rate whether their class is fun, exciting, interesting, something that the student would recommend to other students.

This is not a call for instructors to become entertainers, but rather a call to make the subject matter the instructor loves as interesting and exciting for the students as the subject is to the instructor. And if an instructor does not love the subject they are teaching, then that instructor should not teach that subject, and should perhaps consider leaving education altogether.

An education should have value for the child, the student, in the here and now, in the present.

Halloween 2016

Halloween 2016 fell on a Monday, a school night. This was also a social security Monday – the end of the month, when senior citizens come to Kolonia to collect their social security checks and go shopping. That income is important to many families here, and the end of the month falling on a Monday meant that the Halloween shopping weekend was likely negatively impacted. In local parlance, October 29 and 30 were a “broke weekend.”

Tristan and Kisha Halloween 2016
Tristan and Kisha Halloween 2016

The weather was acceptable, only a brief passing light rain shower in Dolihner, otherwise generally dry conditions.

Perhaps the largest factor was that last year Halloween fell on a Saturday night. A weekend with no school the next day.

Whatever the underlying factors, numbers were down year-on-year. Group sizes are very rough estimates, with overestimation more likely than underestimation. That said, the front porch saw a drop from 90 groups in 2015 to 79 groups in 2016. Traffic began around 18:35, but by 20:30 no further trick-or-treaters arrived on the porch.

Halloween group sizes 2015
Halloween group sizes 2015

Note the nine outlying groups in 2015 – groups with more than roughly 15 candy receivers, including one near 45 and another above 50. The differential in the number of groups is a drop of only eleven. The lack of large groups, however, meant that the raw number of individual candy takers was down more significantly.

Halloween group sizes 2016
Halloween group sizes 2016

The numbers were down even more significantly. The count of candy receivers in 2015 was 590. In 2016 only 416 showed up on the porch, a drop of 174 trick-or-treaters. Average group size also dropped, primarily a function of the drop in the number of large groups and the absence of any group larger than 35. The household thought that the choice to block cars from driving up the interior road negatively impacted the large group counts. My sense is that the large trucks used to haul the big groups of kids from other parts of the island may not have been as available on a school night as they were on a Saturday night last year.

In 2015 the average group size was 6.56 with a standard deviation of 8.90. In 2016 average group size was 5.27 with a standard deviation of 5.50. The median, however, increased from 3 to 4 year-on-year.
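The summary statistics above (mean, sample standard deviation, median of the group sizes) take only a few lines to reproduce. A minimal sketch in Python, using hypothetical group sizes as stand-ins since the actual data lives in the Google Sheet:

```python
# A sketch of the summary statistics above; the group sizes here are
# hypothetical stand-ins, not the actual porch tallies.
from statistics import mean, stdev, median

group_sizes = [2, 3, 4, 4, 5, 6, 13]  # illustrative values only

avg = mean(group_sizes)
sd = stdev(group_sizes)   # sample standard deviation
med = median(group_sizes)
print(f"mean={avg:.2f} sd={sd:.2f} median={med}")
```

A long right tail from a few very large groups pulls the mean well above the median, which is exactly the 2015 pattern described above.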

We again used the dual bowl system: one twenty-five dollar bag of better candy and a single 330-count bag of Hershey’s Kisses. Elterina added three bags of additional small candies that may have added upwards of 90 candies to the Kisses bowl. We ended the evening with candy still on hand.

For those who want to play with the raw data, the data is available in a Google Sheets spreadsheet. Analysis was done using Google Sheets with the above charts prepared using the Google Statistics add-in for Google Sheets.

Reading books

A quarter century ago I often kept a book at hand, sometimes lugging it along and catching a page or two on a city bus or commuter train. Moving to Micronesia meant that I could not wander into a book store, rummage the shelves, and find a book of interest. Occasionally the library would acquire a book of interest to me, or more rarely I would request that a particular text be acquired, but these were rare events.

My taste in books is both eclectic and not best seller. Books on statistics, physics, and running tend to hold my interest – genres that even the largest bookstores would carry in limited selection only. By the turn of the century Amazon had come into existence and provided a potential option. The books I preferred, however, were often hardback, expensive, and shipping to these islands always carries a probability of loss. Not to mention that once here, books decay in the heat and humidity. There is no building up of a personal library in the equatorial tropics.

Back in September 2014 I upgraded from a Nokia Asha feature cell phone to an LG Android smart cell phone. A trip in October caused me to add a Kindle app and a book to read on the long flight, with little thought to use beyond the one journey.

Although the LG has a small screen relative to the size of a book or a monitor, I was pleasantly surprised at the readability. In 1999 Bill Hill wrote at length about the “magic of reading,” bringing together research on ludic reading, Optimized Serial Pattern Recognition (OSPREY), and generating the immersive flow that accompanies reading at length for pleasure. The paper delved into fonts and screen resolution.

In 1980 computer monitor resolutions were too low to support fonts, let alone sustained reading for pleasure. In 1984 the Macintosh introduced screens with resolutions that could support fonts. By the 1990s increasing monitor resolutions suggested that screens would eventually equal the resolution of print products. I recall being in conversations about whether screens could or would replace the printed book. As an over-generalization, older readers felt that screens would never generate the flow and magic of books.

The rise of social media after the turn of the century caused an ever increasing number of people to spend significant time reading via a monitor. By 2015 reading done from a screen around campus clearly dominated reading from a book.

The Kindle book on the LG was a one-off experiment for the purpose of a long flight; I did not expect to find readability and flow on the small LG screen. Once I discovered that I could enjoy a book on my cell phone, I continued to read after I returned.

The books were not free, but each cost less than a single night of stone sakau. Reading only happens in the interstitial moments between other daily tasks; thus a single book can last me a month. That makes reading a less expensive habit than weekend sakau, a definition of affordability for me.

Books in Kindle
Kindle shelf

Reading on the cell phone returns the ability to spontaneously grab a page or two of reading here or there: while waiting for a meeting to start, in a bank line, or while sitting in the car waiting for the shoppers to finish shopping. No need to lug around a book; I have a small library tethered to my hip. I carry my books even when I am running, and they do not slow me down.

I was looking at the shelf today and thinking the thought that so many educators have thought before me: doesn’t this change everything? Is this not a change on the scale of the Gutenberg press making school textbooks possible?

I do not know where technology may take education; I only know that after a quarter century I am reading regularly again. Technology has again changed my habits and my personal quality of life, in this case enriching life on a small rock in the Pacific Ocean.

Pacific island dance judging analysis

Last Saturday I assisted with judging a preliminary round of a Pacific island dance contest. This Saturday I was one of four judges for the final round. Four judges judged seven groups on five criteria. The criteria were modified from last week.

1. Movement. (Late movement, Turn the other way, keep looking at the partner, etc.)
2. Costume
3. Always smile? Ashamed? No singing with the music?
4. Well practice?
5. Performance as a group
Chewing gum and chewing betelnut and spitting: 10 point deduction.

Each criterion was worth ten points. Each group danced two dances, and each dance could generate up to 5 of the 10 points in each criterion.

Bring it on girls
Bring it on girls

With four judges and five criteria each worth ten points, there was in theory a maximum of 200 points possible. At the end of the evening the rank-ordered totals for the seven dance groups ranged from 152 to 182 points.

Dancers Sum
Bring it on girls, Pohnlik, Kolonia 182
Sista Sista, Ohmine, Kolonia 175.3
Young Roses, Paliais, Nett 174
Kapinga Pride, Pohn Rakied, Kolonia 162.5
G-Babes, Meitik, Nett 159
Ohnonlong, Wone, Kitti 158
Beauty Cousins, Madolehnihmw 152

Second and third place were separated by only 1.5 points, fifth and sixth by one point. As a statistician I have a preference for rubrics that generate more spread. I am all too keenly aware that small differentials are not statistically significant and are not likely to be repeated. That said, a dance contest is not unlike a sprint: crossing the line a few hundredths of a second ahead of another runner is the difference between gold and no medal.

Each group had four scores, one from each of the judges. The range from the lowest score to the highest was smallest for Kapinga Pride and largest for the G-Babes. The G-Babes divided and decorrelated the judges.

Dance group score spread
Dance group score spread

Beauty Cousins also saw a large range in scores.

Each judge’s seven scores, one for each dance group, tended to distribute in a range from 35 to 45.

Dance judges score distribution for the seven dance groups
Dance judges score distribution for the seven dance groups

Two judges had rather symmetric distributions about median scores of 40 and 41.5. My median of 45 was high and asymmetric. My low score of 32 was not an outlier, but that was in part due to my large inter-quartile range. One judge had a low outlier and the highest upper whisker at 49.

The dance instructors, coaches, and advisers are most likely to want to know their strengths and weaknesses against the rubric used.  Overall the category “Always smile? Ashamed? No singing with the music?” (termed “Facial” in my analysis) scored the lowest.

Criteria Average
Costume 8.46
Facial 7.82
Movement 8.14
Performance 8.48
Practiced 8.62

One of the judges last week noted that she wanted to see more eye contact. Engage the audience, smile, show confidence, and show that you are enjoying yourself. I noted that one of the dancers seemed a little stiff, reserved, and was not moving as fluidly as I knew she could. I asked her and she said she was nervous tonight. During the free style, free dance at the end of the evening, however, her smile beamed out and she threw herself into competitive dancing with nothing short of gay abandon. All of her grace and fluidity were back.

One group looked over-practiced. They did not smile, just went through the paces. Perfect sync, no life, no zest. Maybe too many hours of practicing the dance over and over. The first place winners danced with confidence, big smiles, lively. They knew their moves, but they also appeared to be having fun with their dance and seemed to relish the spotlight. They were up there to bring it, as their group name suggests, and they were clearly excited at the chance to perform in front of their friends and family.

Dance criteria averages by group
Dance criteria averages by group

The G-Babes were judged to have the best costumes with a 9.75 average for the four judges. In the other categories Bring it on girls captured the top averages.

Last week I noted some correlation differences among the three judges.

Correlations Leilani Dana Kiyoshi
Leilani 1
Dana 0.748 1
Kiyoshi 0.735 0.655 1

I correlated well with Leilani, and she correlated moderately well with Kiyoshi. The items that correlated Leilani and me were not those that correlated Dr. Umezu and me, and we saw a lower correlation. This pattern occurred again tonight.

Correlations Dela Cruz Ichikawa Lee Ling Umezu
Dela Cruz 1
Ichikawa 0.19 1
Lee Ling 0.88 −0.10 1
Umezu 0.18 0.58 −0.07 1

Dela Cruz and I were highly correlated, we strongly concurred. Dela Cruz was only weakly correlated, at best, to Ichikawa and Umezu. I was not correlated to Ichikawa and Umezu, with a relationship between our scores that was no better than random. Ichikawa and Umezu were only moderately correlated.

A more detailed study of the correlations by criteria suggests that the four judges concurred on costume and performance, saw some limited agreement on movement, and disagreed on facial and practice criteria. Of interest was that the disagreements did not always occur between the same judges in those criteria where disagreement occurred.

Ultimately the rubric is open to interpretation. There was no training on the rubric nor were any of the judges professionally trained in dance. The goal was to identify groups that could be called upon to dance in dinner shows for visitors and guests. What might impress a tourist is not necessarily the same as what is likely to impress a professional dance judge. I suspect the judges achieved the goal desired despite some issues of internal inconsistency and differing interpretations.

The greatest divergence of scores seemed to be around G-Babes. I was left wondering whether there are cultural differences that may impact how one views the G-Babes.

G-Babes dancers
G-Babes dancers

The G-Babes were the youngest dancers dancing, maybe five or six years old, maybe the eldest is seven or thereabouts. I wondered about the G-Babes myself, but during the free dance sessions they were the first to stream out onto the grass and they danced and laughed with such enthusiasm. They were having the time of their lives. They clearly loved to “shake it” and enjoyed being out there with the “big girls”.

Pacific island free dance
Pacific island free dance

One cannot appreciate their diminutive stature compared to the other dancers until one sees them next to a Sista Sista dancer or a Bring It On Girls dancer, as in the above photo. Whatever differences the judges perceived, the G-Babes were clearly the crowd favorite. Ferocious amounts of cuteness, clearly well practiced dance routines, and having fun.

Sista Sista, Ohmine
Sista Sista, Ohmine

That said, what does the visiting tourist come to see at a dinner dance show? What expectations are there? All of the dances tonight were Polynesian, and only Kapinga Pride is of Polynesian heritage. The true dances of Micronesia are wonderful and awesome, but nothing like what a tourist imagines. Besides, to some extent the Micronesian dances are reserved to their cultures. They are dances with meanings and cultural import. For the Micronesian dancers, the Polynesian dances are both what the tourist expects and what is culturally more comfortable to deliver up to foreigners. That is only my opinion, but if accepted, then groups such as Bring It On Girls and Sista Sista are the dinner show dance groups.

As I noted last week, I know some of the dancers, their families, and where they are from on the island. I was impressed with all of the groups. Everyone clearly had gone home and put in a lot of solid practice. The costumes were also stepped up and improved. One could see a lot of work, effort, and pride had gone into each group’s preparation. I was certainly proud of all the dancers tonight.

Kaycie Dikepa and Yolanie Lucky
Kaycie Dikepa and Yolanie Lucky, Kapinga Pride

Motor learning

I once wrote about the learning curve for learning to ride a RipStik and my own penchant for teaching whatever skills I have learned. About 18 months ago a five year old learned to ride a RipStik on our porch and then she left for another island. She had not seen a RipStik for 18 months.

Fresh off the airplane she did not seem to remember me or the times we spent together a year and a half ago. Upon reaching the house she saw the RipStik and immediately took to trying to ride it. After a couple of failed attempts, she was back up and riding.

18 months cold, the motor memories remain encoded
18 months cold, the motor memories remain encoded

Whatever the mechanism for this long term motor memory, it is rather amazing given that much of the rest of her world of 18 months earlier is for the most part forgotten.

Counting pillars and posts

The students were paired off on day one and told to count the number of pillars and posts on campus. The only definition given was that a pillar or post should stand free from a structure – not be a wall element.

Counting pillars
Counting pillars

I did not mention how to count the two-story-tall pillars of the A and B buildings. This led to a natural introduction of the importance of definitions. The counting took 25 minutes, with some groups finishing faster and a few slower.

Counts by section:

M8:  275, 228, 152, 312, 316 (sample size n = 5)
M9:  258, 261, 281, 286, 298, 305, 309, 310 (n = 8)
M10: 266, 281, 283, 285, 285, 293, 296, 326 (n = 8)

Notes on particular counts:
275: unspecified how A, B building pillars were counted
258: A, B pillars counted as one, top to bottom
305: A, B pillars counted as two, top to bottom

A good sample description must include clear definitions. In the 9:00 class the students noted that they were counting the pillars differently for the A and B building. Some counted the pillar top to bottom as one pillar, some counted “per floor” and wound up with double the number of pillars. I suspect this happened at 8:00 as well.

Nancyleen tallies data

Note that the 152 is an outlier: that group did not go all the way to the cafeteria – each member thought the other had gone there. This provided a chance to note that z-scores can be used to help identify outliers that could potentially be data errors.
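A minimal sketch of that z-score check: pool the counts from the three sections and flag any value more than two standard deviations from the mean.

```python
# Pool the pillar counts from the three sections and flag any count
# more than two standard deviations from the mean as a possible
# data error (a common rough z-score rule).
from statistics import mean, stdev

counts = [275, 228, 152, 312, 316,                 # M8
          258, 261, 281, 286, 298, 305, 309, 310,  # M9
          266, 281, 283, 285, 285, 293, 296, 326]  # M10

m, s = mean(counts), stdev(counts)
flagged = [x for x in counts if abs((x - m) / s) > 2]
print(flagged)  # only the incomplete count is flagged
```

Only the 152 count from the group that skipped the cafeteria falls beyond two standard deviations; every other count, including the 326, sits comfortably inside the band.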

General education core science assessment

In early August 2009 I was tasked with setting up an initial assessment of the following two program learning outcomes:

Outcome 3.4: Define and explain scientific concepts, principles, and theories of a field of science.
Outcome 3.5: Perform experiments that use scientific methods as part of the inquiry process.

A 20 August 2009 worksheet number two document records the evaluation question as “Can students demonstrate understanding and apply scientific concepts and principles of a field of science and apply scientific methods to the inquiry process?”

The chosen method of evaluation would be to collect laboratories from three courses, SC 120 Biology, SC 117 Tropical Pacific Island Environment, and SC 130 Physical Science, as these courses were being taught system-wide and were felt to well represent the general education science laboratory experience.

A rubric was developed during the fall of 2009 based on one in use at McKendree University (page no longer extant 2011).

Performance factor scores range from 4 (highest) to 1 (lowest).

Metric: Scientific procedures and reasoning
4: Accurately and efficiently used all appropriate tools and technologies to gather and analyze data
3: Effectively used some appropriate tools and technologies to gather and analyze data with only minor errors
2: Attempted to use appropriate tools and technologies but information inaccurate or incomplete
1: Inappropriate use of tools or technology to gather data

Metric: Strategies
4: Used a sophisticated strategy and revised strategy where appropriate to complete the task; employed refined and complex reasoning and demonstrated understanding of cause and effect; applied scientific method accurately
3: Used a strategy that led to completion while recording all data; used effective scientific reasoning; framed or used testable questions, conducted experiment, and supported results with data
2: Used a strategy that led to partial completion of the task/investigation; some evidence of scientific reasoning used; attempted but could not completely carry out testing, recording all data and stating conclusions
1: No evidence of procedure or scientific reasoning used; so many errors, task could not be completed

Metric: Scientific communication/using data
4: Provided clear, effective explanation detailing how the task was carried out; precisely and appropriately used multiple scientific representations and notations to organize and display information; interpretation of data supported conclusions and raised new questions or was applied to new contexts; disagreements with data resolved when appropriate
3: Presented a clear explanation; effectively used scientific representations and notations to organize and display information; appropriately used data to support conclusions
2: Incomplete explanation; attempted to use appropriate scientific representations and notations, but were incomplete; conclusions not supported or were only partly supported by data
1: Explanation could not be understood; inappropriate use of scientific notation; conclusion unstated or data unrecorded

Metric: Scientific concepts and related content
4: Precisely and appropriately used scientific terminology; provided evidence of in-depth, sophisticated understanding of relevant scientific concepts, principles or theories; revised prior misconceptions when appropriate; observable characteristics and properties of objects, organisms, and/or materials used; went beyond the task investigation to make other connections or extend thinking
3: Appropriately used scientific terminology; provided evidence of understanding of relevant scientific concepts, principles or theories; evidence of understanding observable characteristics and properties of objects, organisms, and/or materials used
2: Used some relevant scientific terminology; minimal reference to relevant scientific concepts, principles or theories; evidence of understanding observable characteristics and properties of objects, organisms, and/or materials used
1: Inappropriate use of scientific terminology; inappropriate references to scientific concepts, principles or theories

Laboratory assignments were collected from a single laboratory in each of the three courses system wide. A team of science faculty members marked the laboratories with no faculty marking their own laboratories.

In a worksheet number three document that this author received on 05 August 2010, the following findings were reported. Note that scores were the sum of two readers, hence the four-point scale above is reported as an eight-point scale below. Four metrics, four points maximum, and two readers yield a total possible of 32 points.

1b. Summary of Assessment Data Collected (3-9):

  • Overall average points on lab reports was 14.95 out of 32 possible points.
  • Scientific Procedures & Reasoning = 3.89 out of 8 possible points.
  • Strategies = 4.05 out of 8 possible points.
  • Scientific communication using data = 3.38 out of 8 possible points
  • Scientific concepts and related content = 3.63 out of 8 possible points
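As a quick arithmetic check, the reported overall average is simply the sum of the four metric averages, since each lab report’s total is the four metric scores (each already summed over the two readers):

```python
# The overall average equals the sum of the four metric averages:
# four metrics, each out of 8 points (two readers x 4-point scale),
# for a total possible of 32 points.
metrics = {
    "Scientific Procedures & Reasoning": 3.89,
    "Strategies": 4.05,
    "Scientific communication using data": 3.38,
    "Scientific concepts and related content": 3.63,
}
overall = sum(metrics.values())
print(round(overall, 2))  # 14.95, as reported
```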

The following plans were made to seek improvement in the 2010-2011 academic year.

1c: Use of Results to Improve Program/Unit Impact/Services [Closing the loop] (3-10):
Increase the students’ ability to demonstrate understanding and apply scientific concepts and principles of a field of science and apply scientific methods to the inquiry process by increasing the overall rating from 14.95 to at least 16.

  • Have students write more lab reports than the reported 1 – 3.
  • Collaborate with the Lang/Lit and English divisions to help prepare students to write scientific papers.
  • Provide science faculty with ways to help students write better lab reports.

A questionnaire had accompanied the request to collect laboratories.

2a. Means of Unit Assessment & Criteria for Success:
Students were asked [on a cover page], “How confident are you in writing this lab report?”

2b. Summary of Assessment Data Collected:
Only one group of students was not asked to complete the cover page for this part of the assessment. 50 of the 60 students assessed completed the form with the following results:

  • 33 (66%) of the 50 students reported that they felt confident or very confident in completing the lab report.
  • 7 (14%) of the 50 students reported that they were nervous, unsure, or uncomfortable in writing a lab report.
  • Confidence of students doesn’t match with low ratings

In a workshop on or about 05 August 2010, a decision was made to repeat the science assessment activity with the same courses and the same rubric. The above results were restated as three multi-part comments on planning worksheet two completed on 05 August 2010:

Comment 1. Increase the students’ ability to demonstrate understanding and apply scientific concepts and principles of a field of science and apply scientific methods to the inquiry process by increasing the overall rating from 14.95 to at least 16 by:
C1a. Having students write more lab reports than the reported 1 – 3.
C1b. Collaborating with the Lang/Lit and English divisions to help prepare students to write scientific papers.
C1c. Providing science faculty with ways to help students write better lab reports.

Comment 2. Only one group of students was not asked to complete the cover page for this part of the assessment. Make sure to collect the same data for all lab reports submitted when the project is run again.
C2a. Repeat directions more than a few times.
C2b. Remind assignment administrators again just before distributing the assignment.

Comment 3. Increase student confidence in their writing.
C3a. Provide immediate feedback.
C3b. Find ways for expectations from instructors to be clear to students such as providing students with rubrics before the assignment is due.
C3c. Provide more opportunities for scientific writing.

The above comments (recommendations) were to be circulated to all science faculty. A new assessment coordinator came on board in this same time frame. Laboratories were requested in the fall of 2010 and submitted to the assessment coordinator who later assembled a marking team. This author received a worksheet number three on 03 August 2011. The worksheet had a creation date of 18 July 2011.

The first item on worksheet number three appears to have been an addition made by the assessment coordinator at that time as the metric was not called for in the August 2010 worksheet number two document. The first item reported on course completion rate information. The metric noted that a 70% completion rate was expected for students in the courses.

A table in the report, reproduced below, reported on the completion rate.

Course Completion Rate for Fall 2010
Row Labels Enrollment Total Passed Completion Rate
SC117 57 53 93%
SC120 105 77 73%
SC130 51 46 90%

Completion was not fully defined; the worksheet noted only that, “Criteria for success: General Weighted Mean of 70% or higher.”
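The completion rates in the table follow directly from the enrollment and pass counts; a short check:

```python
# Recompute the completion-rate column from the reported enrollment
# and pass counts: each rate is passed/enrolled, rounded to a percent.
courses = {"SC117": (57, 53), "SC120": (105, 77), "SC130": (51, 46)}
rates = {name: round(100 * passed / enrolled)
         for name, (enrolled, passed) in courses.items()}
print(rates)
```

All three rates exceed the stated 70% criterion.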

The common science laboratory assignment was reported in section 2a and 2b. The data was reported in a slightly different manner than the prior year. The data reported was:

2b. Summary of Assessment Data Collected:
Overall average score on science assignment: 2009 – 14.95; 2010 – 21.9

2009 Total

2010 Total
While all campuses saw scores improve, there was no report on whether implementation of the comments (recommendations) had occurred. Thus there was no way to determine whether the earlier worksheet two recommendations were causative of the score improvement.

A second potential complication was that the marking teams were not the same for the two years, thus there was the possibility that the two teams had interpreted the rubric in different manners.

Score improvements were seen in all four metrics when looked at system-wide:

Sc Proc & Reasoning

Sc Comm/Using Data

Sc Concepts & Related Content
Again, ferreting out meaning, whether score improvement was due to specific actions taken by instructors, random change, or a difference in the way the marking teams marked, was not determinable. Were more laboratories assigned? Did collaboration occur with language and literature instructors? Did instructors provide rubrics prior to assigning the laboratories? Was feedback as immediate as was reasonably possible?

Section 3a and 3b of the 18 July 2011 worksheet number three reported on student confidence.

3a. Means of Unit Assessment & Criteria for Success (3-8):
Students were asked, “How confident are you in completing the Science lab?”

3b. Summary of Assessment Data Collected (3-9):
Confidence level:

      2010    2009
High   43%    66%
Med.   10%
Low    12%    14%

Comments (made by the then assessment coordinator): Confidence levels on the low end are similar in both years. Perhaps in 2010 students’ confidence at the high and medium levels is more in line with lab scores.

On 05 August 2011, worksheet number two was prepared for the 2011-2012 academic year. The recommendations on worksheet number two (listed as comments) were as follows:


The following recommendations derive from the 2009-2010 assessment cycle for general education science. During the 2010-2011 school year no data was gathered on whether these comments were implemented. Improvements seen in ratings might be due to rater bias – different raters were used year-on-year.

Comment 1. Increase the students’ ability to demonstrate understanding of and apply scientific concepts and principles of a field of science, and to apply scientific methods to the inquiry process, by increasing the overall rating from 14.95 to at least 16 by:
C1a. Having students write more lab reports than the reported 1 – 3.
C1b. Collaborating with Lang/Lit, and English divisions to help prepare students to write scientific papers.
C1c. Providing science faculty with ways to help students write better lab reports.

Comment 2. One group of students was not asked to complete the cover page for this part of the assessment. Make sure to collect the same data for all lab reports submitted when the project is run again.
C2a. Repeat directions more than a few times.
C2b. Remind assignment administrators again just before distributing the assignment.

Comment 3. Increase student confidence in their writing.
C3a. Provide immediate feedback.
C3b. Find ways for expectations from instructors to be clear to students such as providing students with rubrics before the assignment is due.
C3c. Provide more opportunities for scientific writing.

Comment 4. Document whether recommendations one to three were actually implemented system-wide and, if not, what issues militated against implementation. Bear in mind that the comments/recommendations did not have system-wide buy-in.
C4a. Specific solutions yet to be determined. Possible use of a self-report survey.

Comment 5. Resolve the issue of consistency in marking of the laboratories year-on-year.
C5a. Specific solution yet to be determined.

Comment 6. Begin discussion of whether current assessment is providing useful, actionable information on accomplishment of program learning outcomes. Consider alternative assessment options for 2012-2013 school year.

Comments C4a and C5a were highlighted in the original document.

The absence of an assessment coordinator to coordinate the tracking and collection of the meta-data needed to report on whether the recommendations (comments) were implemented, and if so to what extent, once again puts at risk the ability to interpret the raw data and to report on the recommendations in May 2012.

This article is intended to improve institutional memory by gathering in one place the key results of the general education core assessment for the past three years, along with the rubric being used. At present this information is scattered across multiple files exchanged only via email over a three-year period. This end note is intended only to make plain that blogs are a useful way to report and make transparent assessment efforts, data, and results.

Teacher Corps Assessment

At the end of a week-long mathematics and science workshop, the 21 participants were asked to respond to the following questions. A report on the workshop exists as two blog articles, Teacher Corps and Teacher Corps II. Responses were obtained from 17 participants.

1. What was the most useful activity for you as a teacher?
2. What was the least useful activity for you as a teacher?
3. What was the most interesting activity?
4. What was the least interesting activity?
5. What was the most surprising experience during this past week?
6. What was the most fun?
7. What would you change if such a workshop was run in the future for teacher corps?

The table below is an excerpt from a larger table of responses. Responses were tallied and common responses were combined. The table includes only those responses which appeared three or more times. Note that respondents were permitted to cite more than one activity per question if they chose to do so.

Response Most useful Least useful Most interesting Least interesting Surprising Fun Sum
Plant names 4 3 2 7 3 19
Field trip 2 3 2 7 14
Floral litmus 1 5 1 2 9
None 7 2 9
All 6 6
Constructions 1 1 2 1 5
El Niño 1 1 2 1 5
Local materials 3 1 4
Speed of sound 1 3 4
Fibonacci ratio 1 1 1 3
Marble math 1 1 1 3
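Tallying responses of this kind, where each respondent may cite more than one activity per question, can be sketched with a counter; the response lists in the example below are hypothetical, not the actual survey data:

```python
from collections import Counter

# Tally multi-response survey answers: each respondent may cite more
# than one activity per question. The responses here are hypothetical.
responses = {
    "most fun": [["field trip"], ["field trip", "plant names"], ["floral litmus"]],
    "most useful": [["plant names"], ["all"], ["plant names", "constructions"]],
}

tallies = {question: Counter(activity
                             for respondent in answer_lists
                             for activity in respondent)
           for question, answer_lists in responses.items()}

for question, counts in tallies.items():
    print(question, counts.most_common())
```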

The “plant names” response refers to a number of walks on which plants were identified by the instructor in the local language of the participant. The participants did not all know their own plant names and many found this interesting and surprising.

A field trip to the Pwunso botanic garden to view spice plants, timber trees, and learn about the benefits of local foods from the Island Food Community garnered the most votes for being fun.

A laboratory that used boiled flowers to generate floral litmus solutions was rated the most interesting activity by the participants, followed by a laboratory that determined the speed of sound using sticks from the forest, orally counted seconds, and echoes.
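The echo method comes down to a single formula: each echo covers twice the distance to the reflecting surface. A minimal sketch, with illustrative numbers rather than data from the workshop:

```python
# Echo-based speed of sound: clap sticks in rhythm with the returning
# echo, then count claps over an orally counted interval. Each period
# covers twice the distance to the reflecting surface.
# All numbers are illustrative, not measurements from the workshop.
distance_m = 85.0   # paced-off distance to the reflecting wall
claps = 20          # claps kept in time with the echo
seconds = 10.0      # counted duration for those claps

period_s = seconds / claps            # round-trip time per echo
speed = 2 * distance_m / period_s     # metres per second
print(f"estimated speed of sound: {speed:.0f} m/s")
```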

Six of the seventeen respondents felt that all of the activities engaged in were useful to them as teachers and as future teachers. Seven felt that none of the activities could be classified as least useful to them.

Constructing circles, triangles, squares, pentagons, and hexagons with a string and a straight edge generated the most even split between positive and negative responses of any activity.

Although not shown in the excerpt above because the number of responses was only two, a side unit on sound waves done in the computer laboratory, a unit on logic (categorical propositions, the square of opposition, and categorical syllogisms), and a batteries and bulbs activity were the only activities to receive more than one negative response.

The participants were also asked what they would change if the workshop were to ever be run again. The following responses are in descending order of popularity.

Move the start time one hour later, from 8:00 to 9:00 (4 respondents)
Ensure lunch is arranged (4 respondents)
Run the workshop at dates that do not fall so close to a major holiday (3)
Cover how to prepare a science worksheet for lower grades (1)
Have more field trips (1)
Shorten the workshop day to four hours (1)
Extend the workshop to three weeks (1)
Spend more time outside (1)

Teacher Corps II

Thursday morning, day four of the workshop, opened with a focus on captivating students’ attention. No attention, no learning. Rather than say this up front, however, the concept was made concrete by putting a teacher, supported by two other teachers, on a RipStik caster board.

With a teacher standing on the board, the difference between dynamic and static stability was explained. Having a teacher held up on the unstable, stationary board focused the attention of at least the teacher on the board, if not the class.

With the definition illustrated, the concept was extended to climate change. If the global climate is essentially statically stable, then small perturbations in that system should engender nothing more than small, fairly stable changes in the global climate. If the global climate system is only dynamically stable, then small changes may have unexpected effects including potentially large changes as described in runaway climate change scenarios.

Following this presentation, the instructor used the RipStik to introduce waves. The RipStik leaves behind a distinctive wave on the paper. The wave form provides an opportunity to introduce terminology such as crest, trough, wavelength, and amplitude. The RipStik also makes concrete frequency as being the number of “wiggles” per second.

Dana on a RipStik laying down a waveform

Rapid wiggling generates a high frequency and a short wavelength. Slow wiggling generates a low frequency and a long wavelength. Thus the caster board neatly demonstrates the inverse relationship between wavelength and frequency that is seen in many systems.

Best of all, for the caster board the wave speed (frequency times wavelength) is exactly the linear board speed.
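That relationship can be checked in a few lines; the board speed and frequencies below are illustrative values, not measurements from the workshop:

```python
# On the RipStik track, wave speed = frequency x wavelength, and that
# product is the linear speed of the board. Values are illustrative.
board_speed = 1.0   # metres per second

# At a fixed board speed, faster wiggling (higher frequency) must
# produce a shorter wavelength: the inverse relationship.
for frequency_hz in (1.0, 2.0, 4.0):
    wavelength_m = board_speed / frequency_hz
    print(f"f = {frequency_hz} Hz -> wavelength = {wavelength_m} m")
```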

Images of the tracks with labeled features were illustrated in an article written by the workshop lead in October 2011. The activity is used in conjunction with a unit on waves in physical science.

The board, ridden on paper on concrete, provides a way to bring wave phenomena down into grades below the high school level. The boards do cost money, and one has to either ride the board or have a rider, yet there are a fair number of young riders even here on Pohnpei, so this might be an option for a teacher: simply have a student ride their board across the paper.

Inside the classroom transverse waves on a length of chain and longitudinal waves in a Slinky spring were demonstrated.

Following the Thursday morning break, the 10:00 session started with geometric math standard 2.3.1, recognize common shapes. In a twist on standard 2.8.1, however, all shapes were constructed using only a length of string as a compass and a meter stick from the forest. The meter sticks had been built on Monday. Constructions based on these limitations are well covered by Zef Damen.

Constructions started with a circle and moved on to equilateral triangles, hexagons, squares, and finally a pentagon. A proof of the Pythagorean theorem was also presented, along with a proof of the irrationality of the square root of two. This last fact was problematic for the Pythagoreans and their math system, which effectively postulated that all one needs to do mathematics are “marbles and pompoms,” along with ratios of marbles and pompoms.
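A standard version of the proof of the irrationality of the square root of two, presumably close to the one presented, runs as follows:

```latex
% Proof by contradiction that the square root of two is irrational.
Suppose $\sqrt{2} = \tfrac{p}{q}$ with $p, q$ integers sharing no common factor.
Then $p^2 = 2q^2$, so $p^2$ is even, hence $p$ is even; write $p = 2k$.
Substituting gives $4k^2 = 2q^2$, so $q^2 = 2k^2$ and $q$ is also even.
Both $p$ and $q$ are then divisible by $2$, contradicting the assumption
that the fraction was in lowest terms. Hence $\sqrt{2}$ is irrational.
```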

Rustem analyzes acids and bases, Trevor on the right

In the background above, the pentagon/pentagram construction can be glimpsed; on the far right is part of the Pythagorean proof.

The square root of two is not, however, expressible in the Al Mat marbular system, much to the consternation of the Pythagoreans. Thousands of years later Cantor would show that the infinity of the irrational numbers is of a higher order than that of the integers.

By the end of the session the class had moved from 2.3.1 recognize common shapes and 2.4.1 identify and classify shapes up past 2.8.1 constructions, 2.8.4 Pythagorean theorem, and 1.8.3 square roots, and on into a presentation of the proof of the irrationality of the square root of two.

After the lunch break the class spent half an hour in the computer laboratory, where a Fourier sound applet was demonstrated. The applet showed the connection between wavelength and frequency for sound waves, along with a graphical representation of a sound wave.

The teachers then moved downstairs to engage in a laboratory using floral litmus solutions to detect acids and bases, based directly on physical science laboratory thirteen.

Yasko with acid, base, and neutral detection

The session served science standards Sci 1.hs.1 and the Sc hs benchmark chemistry bullet item number eighteen, the study of acids, bases and salts.

Benskin with test tubes

The laboratory also demonstrated the use of minimal glassware and locally available materials including common household chemicals in a chemistry experiment.

Cheryl tests a floral litmus solution against a known base

In the final session of the day, which began at 3:00 in the afternoon, the teachers returned to the computer laboratory, where science outcomes 3.4.2 and 3.5.2, El Niño, La Niña, tropical storm formation, and climatic patterns were presented using presentations put together by Chip Guard of the National Weather Service on Guam. The instructor owes a deep debt of thanks to Chip Guard and the NWS for sharing those presentations.

Friday morning began with a discovery learning session using batteries and bulbs. This particular exercise derives from physical science laboratory 12 and served FSM science learning outcome 2.8.7. A Pohnpei Utility Corporation bill was also worked.
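Working such a bill comes down to metered usage times the rate per kilowatt-hour; a minimal sketch, in which the rate and meter readings are hypothetical placeholders, not PUC's actual tariff:

```python
# Working a simple utility bill: usage (kWh) times the rate per kWh.
# The rate and readings are hypothetical, not PUC's actual tariff.
rate_per_kwh = 0.40       # hypothetical dollars per kWh
previous_reading = 1250   # last month's meter reading, kWh
current_reading = 1410    # this month's meter reading, kWh

usage_kwh = current_reading - previous_reading
bill = usage_kwh * rate_per_kwh
print(f"{usage_kwh} kWh at ${rate_per_kwh:.2f}/kWh = ${bill:.2f}")
```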

Lorleen measures Deffeny

Following this session, the class determined their FiboBelly ratio, an exercise selected from a series of exercises centered on numeric patterns including the Fibonacci sequence.

Yasko measures Benskin

This exercise is often done in conjunction with Fibonacci factors and Pigonacci. Pigonacci includes the connection between the Fibonacci numbers and Pascal’s triangle.
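The connection these exercises draw on is that ratios of successive Fibonacci numbers converge to the golden ratio; a minimal sketch:

```python
# Ratios of successive Fibonacci numbers converge on the golden ratio,
# the proportion the FiboBelly exercise looks for in body measurements.
def fibonacci(n):
    """Return the first n Fibonacci numbers, starting 1, 1."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq

fib = fibonacci(12)
golden = (1 + 5 ** 0.5) / 2
for a, b in zip(fib, fib[1:]):
    print(f"{b}/{a} = {b / a:.4f}")
print(f"golden ratio = {golden:.4f}")
```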

FiboBelly ratios on the white board

The final white board with the teachers’ FiboBelly ratios.

After the morning sessions the teachers prepared workshop evaluations and assessment. Some of the teachers chose to work in the computer laboratory.

A204 Computer laboratory

The workshop wrapped up on Friday the 23rd of December with a pizza luncheon and certificates of completion for all participants.

As an addendum to the Thursday afternoon presentation, the following is an account, held in the Pacific Digital Library, of the damage done by typhoon Lola in November 1957.

Typhoon Lola Pays A Call

A most unladylike intruder by the name of Lola paid a call upon the Trust Territory in mid-November 1957.

Lola was a typhoon of major proportions. Sweeping along like a bulldozing broom, she smashed down valuable breadfruit and coconut trees, submerged crops, wrecked homes and generally produced havoc as she rolled on from the Marshalls through Ponape, Truk, Guam, and up to

The typhoon which caused more over-all damage than any previously recorded within the territory, brought no loss of life and no major bodily injuries as far as is known, although many times tragedy knocked hard and close. In the face of danger, numerous spontaneous acts of valor came to the fore.

Lola entered Ponape District on November 12, leaving havoc, destruction and debris as she whirled on her way. Not for fifty years had Ponape had a typhoon. It was generally considered to be out of the typhoon path. But reports from atolls and islands throughout the area repeated the story of coconut and breadfruit trees destroyed, and of food shortage imminent after the windfall of nuts on the ground would have been made into copra or consumed for food, and the breadfruit eaten.

Kolonia, the Ponape District center, was in the direct path of the storm, as were the islands immediately around it. Knowing that the typhoon was coming, the people of Kolonia took shelter in the hospital building and warehouse, District Administration office, Intermediate School, agriculture station, and in churches and other buildings of the religious missions. For some 250 or more storm refugees in these shelters, C-rations (individual canned foods), rice and sugar were issued by the Administration, also small quantities of kerosene to provide fuel for the ranges on which people prepared hot food and beverages.

The damage to buildings and utilities at Kolonia was considerable. Destroyed were the temporary warehouses and carpenter shed on the site of the new Pacific Islands Central School, and ruined was all of the bagged cement therein, a total loss representing some five thousand dollars. Ponape’s power and telephone systems were heavily hit by falling trees; roads were eroded, and bridges and culverts damaged, with a loss of approximately thirty-three thousand dollars in government property alone.

In addition to buildings damaged and public works systems affected, four vessels went aground in the bay – all privately owned. These were the LUCKY, the CULVER, the MARU, and the ASCOY. All except the first were expected to be refloated. The LUCKY, which was directly hit and forced high onto the reef, was not thought to be salvable.

Cacao pod production in Ponape was reduced by at least fifty per cent by the typhoon, according to estimates, and copra production here also is diminishing as a result of the high winds which blew immature nuts to the ground, or weakened them so that they began falling off before