That information is stored in images. The latter, as mentioned, is a list of vectors. We are interested in the vector corresponding to ch:
> head(m2cx$images[["ch"]])
     Ch char   Can  Man Can cons Can sound Can tone Man cons Man sound Man tone
613       嗅 chau3 xiu4       ch        au        3        x        iu        4
982       尋 cham4 xin2       ch        am        4        x        in        2
1050      巡 chun3 xun2       ch        un        3        x        un        2
1173      徐 chui4  xu2       ch        ui        4        x         u        2
1184      循 chun3 xun2       ch        un        3        x        un        2
1566      斜  che4 xie2       ch         e        4        x        ie        2
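Having this component in hand, we can query it like any other data frame. For instance, we might tabulate the Cantonese tones occurring among these characters; this is just a quick illustration, with the column name taken from the header above:

# tone counts among characters with Mandarin initial x and Cantonese initial ch
table(m2cx$images[["ch"]][["Can tone"]])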
Now, let’s look at the code. Before viewing the code for mapsound() itself, let’s consider another routine we need for support. It is assumed here that the data frame df that is input to mapsound() is produced by merging two frames for individual fangyans. In this case, for instance, the head of the Cantonese input frame is as follows:
> head(can8)
  Ch char   Can
1      一  yat1
2      乙 yuet3
3      丁 ding1
4      七 chat1
5      乃 naai5
6      九  gau2
The one for Mandarin is similar. We need to merge these two frames into canman8, seen earlier. I’ve written the code so that this operation not only combines the frames but also separates the romanization of a character into initial consonant, the remainder of the romanization, and a tone number. For example, ding1 is separated into d, ing, and 1.
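Before turning to the actual code, here is a rough sketch of the kind of separation involved. The helper name splitroman and its simple first-vowel rule are purely illustrative (they assume the romanization contains a vowel and ends in a tone digit); this is merely a sketch, not the code shown below:

# illustrative only: split a romanization such as "ding1" into
# initial consonant(s), remainder, and tone digit
splitroman <- function(rom) {
   nch <- nchar(rom)
   firstvowel <- regexpr("[aeiou]",rom)  # first vowel ends the initial consonant(s)
   cons <- if (firstvowel > 1) substr(rom,1,firstvowel-1) else ""
   sound <- substr(rom,firstvowel,nch-1)  # between the consonant and the tone digit
   tone <- substr(rom,nch,nch)            # tone digit is the last character
   c(cons,sound,tone)
}
splitroman("ding1")  # "d" "ing" "1"

The merge itself then amounts to a single call along the lines of canman8 <- merge2fy(can8,man8), with man8 standing in here for the Mandarin counterpart of can8.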
We could similarly explore transformations in the other direction, from
Cantonese to Mandarin, and involving the nonconsonant remainders of
characters. For example, this call determines which characters have eung
as the nonconsonant portion of their Cantonese pronunciation:
> c2meung <- mapsound(canman8,"Can sound","Man sound","eung")
We could then investigate the associated Mandarin sounds.
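Since c2meung has the same structure as m2cx, that investigation might proceed along the following lines; the Mandarin sound iang is only an illustrative index here, not a value taken from the data:

# which Mandarin nonconsonant sounds arise from Cantonese eung, and how often?
sapply(c2meung$images,nrow)
# examine the characters behind one particular Mandarin sound, say "iang"
head(c2meung$images[["iang"]])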
Here is the code to accomplish all this:
# merges data frames for 2 fangyans
merge2fy <- function(fy1,fy2) {
   outdf <- merge(fy1,fy2)
   # separate tone from sound, and create new columns
   for (fy in list(fy1,fy2)) {
      # saplout will be a matrix, init cons in row 1, remainders in row